2025-03-10 23:16:34.340745 | Job console starting...
2025-03-10 23:16:34.353670 | Updating repositories
2025-03-10 23:16:34.430344 | Preparing job workspace
2025-03-10 23:16:35.876313 | Running Ansible setup...
2025-03-10 23:16:40.725132 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-03-10 23:16:41.433049 |
2025-03-10 23:16:41.433203 | PLAY [Base pre]
2025-03-10 23:16:41.465094 |
2025-03-10 23:16:41.465226 | TASK [Setup log path fact]
2025-03-10 23:16:41.499445 | orchestrator | ok
2025-03-10 23:16:41.523443 |
2025-03-10 23:16:41.523640 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-03-10 23:16:41.569189 | orchestrator | skipping: Conditional result was False
2025-03-10 23:16:41.587910 |
2025-03-10 23:16:41.588124 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-03-10 23:16:41.651709 | orchestrator | ok
2025-03-10 23:16:41.663294 |
2025-03-10 23:16:41.663419 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-03-10 23:16:41.708699 | orchestrator | skipping: Conditional result was False
2025-03-10 23:16:41.725727 |
2025-03-10 23:16:41.725884 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-03-10 23:16:41.751813 | orchestrator | skipping: Conditional result was False
2025-03-10 23:16:41.767266 |
2025-03-10 23:16:41.767422 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-03-10 23:16:41.793004 | orchestrator | skipping: Conditional result was False
2025-03-10 23:16:41.812821 |
2025-03-10 23:16:41.812978 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-03-10 23:16:41.838804 | orchestrator | skipping: Conditional result was False
2025-03-10 23:16:41.864037 |
2025-03-10 23:16:41.864154 | TASK [emit-job-header : Print job information]
2025-03-10 23:16:41.929884 | # Job Information
2025-03-10 23:16:41.930096 | Ansible Version: 2.15.3
2025-03-10 23:16:41.930138 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-03-10 23:16:41.930176 | Pipeline: post
2025-03-10 23:16:41.930204 | Executor: 7d211f194f6a
2025-03-10 23:16:41.930229 | Triggered by: https://github.com/osism/testbed/commit/af7b5875124ec115185ac1bea08af6619a635d52
2025-03-10 23:16:41.930254 | Event ID: 1dbe7dde-fdec-11ef-8229-d4ae7b0c4880
2025-03-10 23:16:41.940500 |
2025-03-10 23:16:41.940632 | LOOP [emit-job-header : Print node information]
2025-03-10 23:16:42.098777 | orchestrator | ok:
2025-03-10 23:16:42.099028 | orchestrator | # Node Information
2025-03-10 23:16:42.099075 | orchestrator | Inventory Hostname: orchestrator
2025-03-10 23:16:42.099110 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-03-10 23:16:42.099140 | orchestrator | Username: zuul-testbed06
2025-03-10 23:16:42.099169 | orchestrator | Distro: Debian 12.9
2025-03-10 23:16:42.099196 | orchestrator | Provider: static-testbed
2025-03-10 23:16:42.099223 | orchestrator | Label: testbed-orchestrator
2025-03-10 23:16:42.099251 | orchestrator | Product Name: OpenStack Nova
2025-03-10 23:16:42.099279 | orchestrator | Interface IP: 81.163.193.140
2025-03-10 23:16:42.121496 |
2025-03-10 23:16:42.121641 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-03-10 23:16:42.597861 | orchestrator -> localhost | changed
2025-03-10 23:16:42.617353 |
2025-03-10 23:16:42.617557 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-03-10 23:16:43.655414 | orchestrator -> localhost | changed
2025-03-10 23:16:43.682114 |
2025-03-10 23:16:43.682255 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-03-10 23:16:43.978409 | orchestrator -> localhost | ok
2025-03-10 23:16:43.994438 |
2025-03-10 23:16:43.994632 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-03-10 23:16:44.045662 | orchestrator | ok
2025-03-10 23:16:44.066082 | orchestrator | included: /var/lib/zuul/builds/b1e5c5de2ce3410bae8409c63759374d/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-03-10 23:16:44.075025 |
2025-03-10 23:16:44.075125 | TASK [add-build-sshkey : Create Temp SSH key]
2025-03-10 23:16:45.181997 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-03-10 23:16:45.182600 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/b1e5c5de2ce3410bae8409c63759374d/work/b1e5c5de2ce3410bae8409c63759374d_id_rsa
2025-03-10 23:16:45.182710 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/b1e5c5de2ce3410bae8409c63759374d/work/b1e5c5de2ce3410bae8409c63759374d_id_rsa.pub
2025-03-10 23:16:45.182780 | orchestrator -> localhost | The key fingerprint is:
2025-03-10 23:16:45.182846 | orchestrator -> localhost | SHA256:UUNb+hR4cpgaVf9QDTIvymt7Ph0Wgropzda07UT+XQo zuul-build-sshkey
2025-03-10 23:16:45.182914 | orchestrator -> localhost | The key's randomart image is:
2025-03-10 23:16:45.182978 | orchestrator -> localhost | +---[RSA 3072]----+
2025-03-10 23:16:45.183031 | orchestrator -> localhost | | o==* ..o|
2025-03-10 23:16:45.183082 | orchestrator -> localhost | | ..==o* ..|
2025-03-10 23:16:45.183130 | orchestrator -> localhost | | .oo=o + |
2025-03-10 23:16:45.183178 | orchestrator -> localhost | | .o.+...o |
2025-03-10 23:16:45.183226 | orchestrator -> localhost | | S.o o. ..|
2025-03-10 23:16:45.183274 | orchestrator -> localhost | | . .+ o |
2025-03-10 23:16:45.183321 | orchestrator -> localhost | | o =ooEo ..|
2025-03-10 23:16:45.183371 | orchestrator -> localhost | | . *.oo+o.o.|
2025-03-10 23:16:45.183419 | orchestrator -> localhost | | o .+o.o .|
2025-03-10 23:16:45.183514 | orchestrator -> localhost | +----[SHA256]-----+
2025-03-10 23:16:45.183638 | orchestrator -> localhost | ok: Runtime: 0:00:00.587955
2025-03-10 23:16:45.204815 |
2025-03-10 23:16:45.205003 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-03-10 23:16:45.258720 | orchestrator | ok
2025-03-10 23:16:45.272947 | orchestrator | included: /var/lib/zuul/builds/b1e5c5de2ce3410bae8409c63759374d/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-03-10 23:16:45.283883 |
2025-03-10 23:16:45.283984 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-03-10 23:16:45.308555 | orchestrator | skipping: Conditional result was False
2025-03-10 23:16:45.317491 |
2025-03-10 23:16:45.317599 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-03-10 23:16:45.853976 | orchestrator | changed
2025-03-10 23:16:45.872907 |
2025-03-10 23:16:45.873095 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-03-10 23:16:46.128478 | orchestrator | ok
2025-03-10 23:16:46.139439 |
2025-03-10 23:16:46.139572 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-03-10 23:16:46.566370 | orchestrator | ok
2025-03-10 23:16:46.574439 |
2025-03-10 23:16:46.574614 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-03-10 23:16:46.959075 | orchestrator | ok
2025-03-10 23:16:46.968504 |
2025-03-10 23:16:46.968622 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-03-10 23:16:47.003393 | orchestrator | skipping: Conditional result was False
2025-03-10 23:16:47.016891 |
2025-03-10 23:16:47.017019 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-03-10 23:16:47.420211 | orchestrator -> localhost | changed
2025-03-10 23:16:47.445952 |
2025-03-10 23:16:47.446098 | TASK [add-build-sshkey : Add back temp key]
2025-03-10 23:16:47.797975 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/b1e5c5de2ce3410bae8409c63759374d/work/b1e5c5de2ce3410bae8409c63759374d_id_rsa (zuul-build-sshkey)
2025-03-10 23:16:47.798362 | orchestrator -> localhost | ok: Runtime: 0:00:00.016415
2025-03-10 23:16:47.813047 |
2025-03-10 23:16:47.813196 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-03-10 23:16:48.185666 | orchestrator | ok
2025-03-10 23:16:48.195805 |
2025-03-10 23:16:48.195949 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-03-10 23:16:48.233920 | orchestrator | skipping: Conditional result was False
2025-03-10 23:16:48.261415 |
2025-03-10 23:16:48.261561 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-03-10 23:16:48.669544 | orchestrator | ok
2025-03-10 23:16:48.689102 |
2025-03-10 23:16:48.689223 | TASK [validate-host : Define zuul_info_dir fact]
2025-03-10 23:16:48.738792 | orchestrator | ok
2025-03-10 23:16:48.748895 |
2025-03-10 23:16:48.749009 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-03-10 23:16:49.061227 | orchestrator -> localhost | ok
2025-03-10 23:16:49.070635 |
2025-03-10 23:16:49.070757 | TASK [validate-host : Collect information about the host]
2025-03-10 23:16:50.276962 | orchestrator | ok
2025-03-10 23:16:50.294849 |
2025-03-10 23:16:50.294972 | TASK [validate-host : Sanitize hostname]
2025-03-10 23:16:50.372727 | orchestrator | ok
2025-03-10 23:16:50.381222 |
2025-03-10 23:16:50.381336 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-03-10 23:16:50.971392 | orchestrator -> localhost | changed
2025-03-10 23:16:50.987694 |
2025-03-10 23:16:50.987872 | TASK [validate-host : Collect information about zuul worker]
2025-03-10 23:16:51.489975 | orchestrator | ok
2025-03-10 23:16:51.500191 |
2025-03-10 23:16:51.500337 | TASK [validate-host : Write out all zuul information for each host]
2025-03-10 23:16:52.069586 | orchestrator -> localhost | changed
2025-03-10 23:16:52.087733 |
2025-03-10 23:16:52.087933 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-03-10 23:16:52.382918 | orchestrator | ok
2025-03-10 23:16:52.392710 |
2025-03-10 23:16:52.392835 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-03-10 23:17:15.713372 | orchestrator | changed:
2025-03-10 23:17:15.713570 | orchestrator | .d..t...... src/
2025-03-10 23:17:15.713603 | orchestrator | .d..t...... src/github.com/
2025-03-10 23:17:15.713627 | orchestrator | .d..t...... src/github.com/osism/
2025-03-10 23:17:15.713648 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-03-10 23:17:15.713668 | orchestrator | RedHat.yml
2025-03-10 23:17:15.727859 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-03-10 23:17:15.727876 | orchestrator | RedHat.yml
2025-03-10 23:17:15.727927 | orchestrator | = 2.2.0"...
2025-03-10 23:17:26.881106 | orchestrator | 23:17:26.880 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-03-10 23:17:26.932175 | orchestrator | 23:17:26.931 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-03-10 23:17:28.176719 | orchestrator | 23:17:28.176 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-03-10 23:17:29.127094 | orchestrator | 23:17:29.126 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-03-10 23:17:30.566901 | orchestrator | 23:17:30.566 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-03-10 23:17:31.416484 | orchestrator | 23:17:31.416 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-03-10 23:17:32.632173 | orchestrator | 23:17:32.631 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-03-10 23:17:33.474960 | orchestrator | 23:17:33.474 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-03-10 23:17:33.475005 | orchestrator | 23:17:33.474 STDOUT terraform: Providers are signed by their developers.
2025-03-10 23:17:33.475705 | orchestrator | 23:17:33.474 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-03-10 23:17:33.475783 | orchestrator | 23:17:33.475 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-03-10 23:17:33.475803 | orchestrator | 23:17:33.475 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-03-10 23:17:33.475818 | orchestrator | 23:17:33.475 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-03-10 23:17:33.475834 | orchestrator | 23:17:33.475 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-03-10 23:17:33.475848 | orchestrator | 23:17:33.475 STDOUT terraform: you run "tofu init" in the future.
2025-03-10 23:17:33.475863 | orchestrator | 23:17:33.475 STDOUT terraform: OpenTofu has been successfully initialized!
2025-03-10 23:17:33.475882 | orchestrator | 23:17:33.475 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-03-10 23:17:33.475897 | orchestrator | 23:17:33.475 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-03-10 23:17:33.475924 | orchestrator | 23:17:33.475 STDOUT terraform: should now work.
2025-03-10 23:17:33.475949 | orchestrator | 23:17:33.475 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-03-10 23:17:33.475963 | orchestrator | 23:17:33.475 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-03-10 23:17:33.475981 | orchestrator | 23:17:33.475 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-03-10 23:17:33.593094 | orchestrator | 23:17:33.591 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-03-10 23:17:33.749867 | orchestrator | 23:17:33.749 STDOUT terraform: Created and switched to workspace "ci"!
2025-03-10 23:17:33.749959 | orchestrator | 23:17:33.749 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-03-10 23:17:33.750240 | orchestrator | 23:17:33.749 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-03-10 23:17:33.750270 | orchestrator | 23:17:33.750 STDOUT terraform: for this configuration.
2025-03-10 23:17:33.913615 | orchestrator | 23:17:33.910 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-03-10 23:17:33.994741 | orchestrator | 23:17:33.994 STDOUT terraform: ci.auto.tfvars
2025-03-10 23:17:34.138748 | orchestrator | 23:17:34.138 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-03-10 23:17:34.973866 | orchestrator | 23:17:34.973 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-03-10 23:17:35.497338 | orchestrator | 23:17:35.496 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-03-10 23:17:35.674189 | orchestrator | 23:17:35.673 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-03-10 23:17:35.674243 | orchestrator | 23:17:35.674 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-03-10 23:17:35.674268 | orchestrator | 23:17:35.674 STDOUT terraform:   + create
2025-03-10 23:17:35.674335 | orchestrator | 23:17:35.674 STDOUT terraform:  <= read (data resources)
2025-03-10 23:17:35.674373 | orchestrator | 23:17:35.674 STDOUT terraform: OpenTofu will perform the following actions:
2025-03-10 23:17:35.674636 | orchestrator | 23:17:35.674 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-03-10 23:17:35.674677 | orchestrator | 23:17:35.674 STDOUT terraform:   # (config refers to values not yet known)
2025-03-10 23:17:35.674722 | orchestrator | 23:17:35.674 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-03-10 23:17:35.674766 | orchestrator | 23:17:35.674 STDOUT terraform:   + checksum = (known after apply)
2025-03-10 23:17:35.674804 | orchestrator | 23:17:35.674 STDOUT terraform:   + created_at = (known after apply)
2025-03-10 23:17:35.674843 | orchestrator | 23:17:35.674 STDOUT terraform:   + file = (known after apply)
2025-03-10 23:17:35.674880 | orchestrator | 23:17:35.674 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.674918 | orchestrator | 23:17:35.674 STDOUT terraform:   + metadata = (known after apply)
2025-03-10 23:17:35.674954 | orchestrator | 23:17:35.674 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-03-10 23:17:35.674994 | orchestrator | 23:17:35.674 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-03-10 23:17:35.675023 | orchestrator | 23:17:35.674 STDOUT terraform:   + most_recent = true
2025-03-10 23:17:35.675066 | orchestrator | 23:17:35.675 STDOUT terraform:   + name = (known after apply)
2025-03-10 23:17:35.675095 | orchestrator | 23:17:35.675 STDOUT terraform:   + protected = (known after apply)
2025-03-10 23:17:35.675224 | orchestrator | 23:17:35.675 STDOUT terraform:   + region = (known after apply)
2025-03-10 23:17:35.675236 | orchestrator | 23:17:35.675 STDOUT terraform:   + schema = (known after apply)
2025-03-10 23:17:35.675242 | orchestrator | 23:17:35.675 STDOUT terraform:   + size_bytes = (known after apply)
2025-03-10 23:17:35.675250 | orchestrator | 23:17:35.675 STDOUT terraform:   + tags = (known after apply)
2025-03-10 23:17:35.675285 | orchestrator | 23:17:35.675 STDOUT terraform:   + updated_at = (known after apply)
2025-03-10 23:17:35.675294 | orchestrator | 23:17:35.675 STDOUT terraform:   }
2025-03-10 23:17:35.675496 | orchestrator | 23:17:35.675 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-03-10 23:17:35.675526 | orchestrator | 23:17:35.675 STDOUT terraform:   # (config refers to values not yet known)
2025-03-10 23:17:35.675572 | orchestrator | 23:17:35.675 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-03-10 23:17:35.675625 | orchestrator | 23:17:35.675 STDOUT terraform:   + checksum = (known after apply)
2025-03-10 23:17:35.675660 | orchestrator | 23:17:35.675 STDOUT terraform:   + created_at = (known after apply)
2025-03-10 23:17:35.675697 | orchestrator | 23:17:35.675 STDOUT terraform:   + file = (known after apply)
2025-03-10 23:17:35.675736 | orchestrator | 23:17:35.675 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.675774 | orchestrator | 23:17:35.675 STDOUT terraform:   + metadata = (known after apply)
2025-03-10 23:17:35.675812 | orchestrator | 23:17:35.675 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-03-10 23:17:35.675849 | orchestrator | 23:17:35.675 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-03-10 23:17:35.675876 | orchestrator | 23:17:35.675 STDOUT terraform:   + most_recent = true
2025-03-10 23:17:35.675918 | orchestrator | 23:17:35.675 STDOUT terraform:   + name = (known after apply)
2025-03-10 23:17:35.675953 | orchestrator | 23:17:35.675 STDOUT terraform:   + protected = (known after apply)
2025-03-10 23:17:35.675995 | orchestrator | 23:17:35.675 STDOUT terraform:   + region = (known after apply)
2025-03-10 23:17:35.676028 | orchestrator | 23:17:35.675 STDOUT terraform:   + schema = (known after apply)
2025-03-10 23:17:35.676076 | orchestrator | 23:17:35.676 STDOUT terraform:   + size_bytes = (known after apply)
2025-03-10 23:17:35.676102 | orchestrator | 23:17:35.676 STDOUT terraform:   + tags = (known after apply)
2025-03-10 23:17:35.676148 | orchestrator | 23:17:35.676 STDOUT terraform:   + updated_at = (known after apply)
2025-03-10 23:17:35.676156 | orchestrator | 23:17:35.676 STDOUT terraform:   }
2025-03-10 23:17:35.676203 | orchestrator | 23:17:35.676 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-03-10 23:17:35.676247 | orchestrator | 23:17:35.676 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-03-10 23:17:35.676287 | orchestrator | 23:17:35.676 STDOUT terraform:   + content = (known after apply)
2025-03-10 23:17:35.676338 | orchestrator | 23:17:35.676 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-03-10 23:17:35.676378 | orchestrator | 23:17:35.676 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-03-10 23:17:35.676430 | orchestrator | 23:17:35.676 STDOUT terraform:   + content_md5 = (known after apply)
2025-03-10 23:17:35.676471 | orchestrator | 23:17:35.676 STDOUT terraform:   + content_sha1 = (known after apply)
2025-03-10 23:17:35.676521 | orchestrator | 23:17:35.676 STDOUT terraform:   + content_sha256 = (known after apply)
2025-03-10 23:17:35.676559 | orchestrator | 23:17:35.676 STDOUT terraform:   + content_sha512 = (known after apply)
2025-03-10 23:17:35.676608 | orchestrator | 23:17:35.676 STDOUT terraform:   + directory_permission = "0777"
2025-03-10 23:17:35.676632 | orchestrator | 23:17:35.676 STDOUT terraform:   + file_permission = "0644"
2025-03-10 23:17:35.676680 | orchestrator | 23:17:35.676 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-03-10 23:17:35.676727 | orchestrator | 23:17:35.676 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.676743 | orchestrator | 23:17:35.676 STDOUT terraform:   }
2025-03-10 23:17:35.676784 | orchestrator | 23:17:35.676 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-03-10 23:17:35.676825 | orchestrator | 23:17:35.676 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-03-10 23:17:35.676877 | orchestrator | 23:17:35.676 STDOUT terraform:   + content = (known after apply)
2025-03-10 23:17:35.676930 | orchestrator | 23:17:35.676 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-03-10 23:17:35.676968 | orchestrator | 23:17:35.676 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-03-10 23:17:35.677020 | orchestrator | 23:17:35.676 STDOUT terraform:   + content_md5 = (known after apply)
2025-03-10 23:17:35.677061 | orchestrator | 23:17:35.677 STDOUT terraform:   + content_sha1 = (known after apply)
2025-03-10 23:17:35.677117 | orchestrator | 23:17:35.677 STDOUT terraform:   + content_sha256 = (known after apply)
2025-03-10 23:17:35.677154 | orchestrator | 23:17:35.677 STDOUT terraform:   + content_sha512 = (known after apply)
2025-03-10 23:17:35.677196 | orchestrator | 23:17:35.677 STDOUT terraform:   + directory_permission = "0777"
2025-03-10 23:17:35.677218 | orchestrator | 23:17:35.677 STDOUT terraform:   + file_permission = "0644"
2025-03-10 23:17:35.677266 | orchestrator | 23:17:35.677 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-03-10 23:17:35.677309 | orchestrator | 23:17:35.677 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.677325 | orchestrator | 23:17:35.677 STDOUT terraform:   }
2025-03-10 23:17:35.677361 | orchestrator | 23:17:35.677 STDOUT terraform:   # local_file.inventory will be created
2025-03-10 23:17:35.677389 | orchestrator | 23:17:35.677 STDOUT terraform:   + resource "local_file" "inventory" {
2025-03-10 23:17:35.677442 | orchestrator | 23:17:35.677 STDOUT terraform:   + content = (known after apply)
2025-03-10 23:17:35.677482 | orchestrator | 23:17:35.677 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-03-10 23:17:35.677527 | orchestrator | 23:17:35.677 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-03-10 23:17:35.677574 | orchestrator | 23:17:35.677 STDOUT terraform:   + content_md5 = (known after apply)
2025-03-10 23:17:35.677628 | orchestrator | 23:17:35.677 STDOUT terraform:   + content_sha1 = (known after apply)
2025-03-10 23:17:35.677673 | orchestrator | 23:17:35.677 STDOUT terraform:   + content_sha256 = (known after apply)
2025-03-10 23:17:35.677718 | orchestrator | 23:17:35.677 STDOUT terraform:   + content_sha512 = (known after apply)
2025-03-10 23:17:35.677748 | orchestrator | 23:17:35.677 STDOUT terraform:   + directory_permission = "0777"
2025-03-10 23:17:35.677780 | orchestrator | 23:17:35.677 STDOUT terraform:   + file_permission = "0644"
2025-03-10 23:17:35.677820 | orchestrator | 23:17:35.677 STDOUT terraform:   + filename = "inventory.ci"
2025-03-10 23:17:35.677866 | orchestrator | 23:17:35.677 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.677883 | orchestrator | 23:17:35.677 STDOUT terraform:   }
2025-03-10 23:17:35.677927 | orchestrator | 23:17:35.677 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-03-10 23:17:35.677960 | orchestrator | 23:17:35.677 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-03-10 23:17:35.678000 | orchestrator | 23:17:35.677 STDOUT terraform:   + content = (sensitive value)
2025-03-10 23:17:35.678062 | orchestrator | 23:17:35.677 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-03-10 23:17:35.678116 | orchestrator | 23:17:35.678 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-03-10 23:17:35.678154 | orchestrator | 23:17:35.678 STDOUT terraform:   + content_md5 = (known after apply)
2025-03-10 23:17:35.678210 | orchestrator | 23:17:35.678 STDOUT terraform:   + content_sha1 = (known after apply)
2025-03-10 23:17:35.678246 | orchestrator | 23:17:35.678 STDOUT terraform:   + content_sha256 = (known after apply)
2025-03-10 23:17:35.678298 | orchestrator | 23:17:35.678 STDOUT terraform:   + content_sha512 = (known after apply)
2025-03-10 23:17:35.678322 | orchestrator | 23:17:35.678 STDOUT terraform:   + directory_permission = "0700"
2025-03-10 23:17:35.678354 | orchestrator | 23:17:35.678 STDOUT terraform:   + file_permission = "0600"
2025-03-10 23:17:35.678394 | orchestrator | 23:17:35.678 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-03-10 23:17:35.678441 | orchestrator | 23:17:35.678 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.678465 | orchestrator | 23:17:35.678 STDOUT terraform:   }
2025-03-10 23:17:35.678496 | orchestrator | 23:17:35.678 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-03-10 23:17:35.678536 | orchestrator | 23:17:35.678 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-03-10 23:17:35.678563 | orchestrator | 23:17:35.678 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.678589 | orchestrator | 23:17:35.678 STDOUT terraform:   }
2025-03-10 23:17:35.678679 | orchestrator | 23:17:35.678 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-03-10 23:17:35.678741 | orchestrator | 23:17:35.678 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-03-10 23:17:35.678789 | orchestrator | 23:17:35.678 STDOUT terraform:   + attachment = (known after apply)
2025-03-10 23:17:35.678807 | orchestrator | 23:17:35.678 STDOUT terraform:   + availability_zone = "nova"
2025-03-10 23:17:35.678849 | orchestrator | 23:17:35.678 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.678889 | orchestrator | 23:17:35.678 STDOUT terraform:   + image_id = (known after apply)
2025-03-10 23:17:35.678928 | orchestrator | 23:17:35.678 STDOUT terraform:   + metadata = (known after apply)
2025-03-10 23:17:35.678978 | orchestrator | 23:17:35.678 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-03-10 23:17:35.679018 | orchestrator | 23:17:35.678 STDOUT terraform:   + region = (known after apply)
2025-03-10 23:17:35.679044 | orchestrator | 23:17:35.679 STDOUT terraform:   + size = 80
2025-03-10 23:17:35.679072 | orchestrator | 23:17:35.679 STDOUT terraform:   + volume_type = "ssd"
2025-03-10 23:17:35.679090 | orchestrator | 23:17:35.679 STDOUT terraform:   }
2025-03-10 23:17:35.679171 | orchestrator | 23:17:35.679 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-03-10 23:17:35.679228 | orchestrator | 23:17:35.679 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:17:35.679275 | orchestrator | 23:17:35.679 STDOUT terraform:   + attachment = (known after apply)
2025-03-10 23:17:35.679291 | orchestrator | 23:17:35.679 STDOUT terraform:   + availability_zone = "nova"
2025-03-10 23:17:35.679331 | orchestrator | 23:17:35.679 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.679368 | orchestrator | 23:17:35.679 STDOUT terraform:   + image_id = (known after apply)
2025-03-10 23:17:35.679405 | orchestrator | 23:17:35.679 STDOUT terraform:   + metadata = (known after apply)
2025-03-10 23:17:35.679453 | orchestrator | 23:17:35.679 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-03-10 23:17:35.679490 | orchestrator | 23:17:35.679 STDOUT terraform:   + region = (known after apply)
2025-03-10 23:17:35.679516 | orchestrator | 23:17:35.679 STDOUT terraform:   + size = 80
2025-03-10 23:17:35.679540 | orchestrator | 23:17:35.679 STDOUT terraform:   + volume_type = "ssd"
2025-03-10 23:17:35.679556 | orchestrator | 23:17:35.679 STDOUT terraform:   }
2025-03-10 23:17:35.679624 | orchestrator | 23:17:35.679 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-03-10 23:17:35.679688 | orchestrator | 23:17:35.679 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:17:35.679717 | orchestrator | 23:17:35.679 STDOUT terraform:   + attachment = (known after apply)
2025-03-10 23:17:35.679742 | orchestrator | 23:17:35.679 STDOUT terraform:   + availability_zone = "nova"
2025-03-10 23:17:35.679780 | orchestrator | 23:17:35.679 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.679816 | orchestrator | 23:17:35.679 STDOUT terraform:   + image_id = (known after apply)
2025-03-10 23:17:35.679854 | orchestrator | 23:17:35.679 STDOUT terraform:   + metadata = (known after apply)
2025-03-10 23:17:35.679901 | orchestrator | 23:17:35.679 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-03-10 23:17:35.679941 | orchestrator | 23:17:35.679 STDOUT terraform:   + region = (known after apply)
2025-03-10 23:17:35.679964 | orchestrator | 23:17:35.679 STDOUT terraform:   + size = 80
2025-03-10 23:17:35.679989 | orchestrator | 23:17:35.679 STDOUT terraform:   + volume_type = "ssd"
2025-03-10 23:17:35.680005 | orchestrator | 23:17:35.679 STDOUT terraform:   }
2025-03-10 23:17:35.680065 | orchestrator | 23:17:35.680 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-03-10 23:17:35.680120 | orchestrator | 23:17:35.680 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:17:35.680158 | orchestrator | 23:17:35.680 STDOUT terraform:   + attachment = (known after apply)
2025-03-10 23:17:35.680183 | orchestrator | 23:17:35.680 STDOUT terraform:   + availability_zone = "nova"
2025-03-10 23:17:35.680221 | orchestrator | 23:17:35.680 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.680258 | orchestrator | 23:17:35.680 STDOUT terraform:   + image_id = (known after apply)
2025-03-10 23:17:35.680295 | orchestrator | 23:17:35.680 STDOUT terraform:   + metadata = (known after apply)
2025-03-10 23:17:35.680343 | orchestrator | 23:17:35.680 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-03-10 23:17:35.680383 | orchestrator | 23:17:35.680 STDOUT terraform:   + region = (known after apply)
2025-03-10 23:17:35.680408 | orchestrator | 23:17:35.680 STDOUT terraform:   + size = 80
2025-03-10 23:17:35.680436 | orchestrator | 23:17:35.680 STDOUT terraform:   + volume_type = "ssd"
2025-03-10 23:17:35.680444 | orchestrator | 23:17:35.680 STDOUT terraform:   }
2025-03-10 23:17:35.680503 | orchestrator | 23:17:35.680 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-03-10 23:17:35.680560 | orchestrator | 23:17:35.680 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:17:35.680603 | orchestrator | 23:17:35.680 STDOUT terraform:   + attachment = (known after apply)
2025-03-10 23:17:35.680627 | orchestrator | 23:17:35.680 STDOUT terraform:   + availability_zone = "nova"
2025-03-10 23:17:35.680667 | orchestrator | 23:17:35.680 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.680704 | orchestrator | 23:17:35.680 STDOUT terraform:   + image_id = (known after apply)
2025-03-10 23:17:35.680742 | orchestrator | 23:17:35.680 STDOUT terraform:   + metadata = (known after apply)
2025-03-10 23:17:35.680789 | orchestrator | 23:17:35.680 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-03-10 23:17:35.680826 | orchestrator | 23:17:35.680 STDOUT terraform:   + region = (known after apply)
2025-03-10 23:17:35.680851 | orchestrator | 23:17:35.680 STDOUT terraform:   + size = 80
2025-03-10 23:17:35.680876 | orchestrator | 23:17:35.680 STDOUT terraform:   + volume_type = "ssd"
2025-03-10 23:17:35.680892 | orchestrator | 23:17:35.680 STDOUT terraform:   }
2025-03-10 23:17:35.680950 | orchestrator | 23:17:35.680 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-03-10 23:17:35.681007 | orchestrator | 23:17:35.680 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:17:35.681044 | orchestrator | 23:17:35.681 STDOUT terraform:   + attachment = (known after apply)
2025-03-10 23:17:35.681070 | orchestrator | 23:17:35.681 STDOUT terraform:   + availability_zone = "nova"
2025-03-10 23:17:35.681107 | orchestrator | 23:17:35.681 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.681145 | orchestrator | 23:17:35.681 STDOUT terraform:   + image_id = (known after apply)
2025-03-10 23:17:35.681183 | orchestrator | 23:17:35.681 STDOUT terraform:   + metadata = (known after apply)
2025-03-10 23:17:35.681230 | orchestrator | 23:17:35.681 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-03-10 23:17:35.681267 | orchestrator | 23:17:35.681 STDOUT terraform:   + region = (known after apply)
2025-03-10 23:17:35.681292 | orchestrator | 23:17:35.681 STDOUT terraform:   + size = 80
2025-03-10 23:17:35.681318 | orchestrator | 23:17:35.681 STDOUT terraform:   + volume_type = "ssd"
2025-03-10 23:17:35.681334 | orchestrator | 23:17:35.681 STDOUT terraform:   }
2025-03-10 23:17:35.681392 | orchestrator | 23:17:35.681 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-03-10 23:17:35.681448 | orchestrator | 23:17:35.681 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-03-10 23:17:35.681485 | orchestrator | 23:17:35.681 STDOUT terraform:   + attachment = (known after apply)
2025-03-10 23:17:35.681510 | orchestrator | 23:17:35.681 STDOUT terraform:   + availability_zone = "nova"
2025-03-10 23:17:35.681549 | orchestrator | 23:17:35.681 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.681607 | orchestrator | 23:17:35.681 STDOUT terraform:   + image_id = (known after apply)
2025-03-10 23:17:35.681646 | orchestrator | 23:17:35.681 STDOUT terraform:   + metadata = (known after apply)
2025-03-10 23:17:35.681690 | orchestrator | 23:17:35.681 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-03-10 23:17:35.681724 | orchestrator | 23:17:35.681 STDOUT terraform:   + region = (known after apply)
2025-03-10 23:17:35.681747 | orchestrator | 23:17:35.681 STDOUT terraform:   + size = 80
2025-03-10 23:17:35.681769 | orchestrator | 23:17:35.681 STDOUT terraform:   + volume_type = "ssd"
2025-03-10 23:17:35.681784 | orchestrator | 23:17:35.681 STDOUT terraform:   }
2025-03-10 23:17:35.681834 | orchestrator | 23:17:35.681 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-03-10 23:17:35.681881 | orchestrator | 23:17:35.681 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-03-10 23:17:35.681916 | orchestrator | 23:17:35.681 STDOUT terraform:   + attachment = (known after apply)
2025-03-10 23:17:35.681939 | orchestrator | 23:17:35.681 STDOUT terraform:   + availability_zone = "nova"
2025-03-10 23:17:35.681976 | orchestrator | 23:17:35.681 STDOUT terraform:   + id = (known after apply)
2025-03-10 23:17:35.682008 | orchestrator | 23:17:35.681 STDOUT terraform:   + metadata = (known after apply)
2025-03-10 23:17:35.682062 | orchestrator | 23:17:35.682 STDOUT terraform:   + name = "testbed-volume-0-node-0"
2025-03-10 23:17:35.682095 | orchestrator | 23:17:35.682 STDOUT terraform:   + region = (known after apply)
2025-03-10 23:17:35.682117 | orchestrator | 23:17:35.682 STDOUT terraform:   + size = 20
2025-03-10 23:17:35.682142 | orchestrator | 23:17:35.682 STDOUT terraform:   + volume_type = "ssd"
2025-03-10 23:17:35.682149 | orchestrator | 23:17:35.682 STDOUT terraform:   }
2025-03-10 23:17:35.682201 | orchestrator | 23:17:35.682 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-03-10 23:17:35.682248 | orchestrator | 23:17:35.682 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-03-10 23:17:35.682282 | orchestrator | 23:17:35.682 STDOUT terraform:   + attachment = (known after apply)
2025-03-10 23:17:35.682306 | orchestrator | 23:17:35.682 STDOUT terraform:
+ availability_zone = "nova" 2025-03-10 23:17:35.682340 | orchestrator | 23:17:35.682 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.682374 | orchestrator | 23:17:35.682 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.682416 | orchestrator | 23:17:35.682 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-03-10 23:17:35.682450 | orchestrator | 23:17:35.682 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.682478 | orchestrator | 23:17:35.682 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.682496 | orchestrator | 23:17:35.682 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.682504 | orchestrator | 23:17:35.682 STDOUT terraform:  } 2025-03-10 23:17:35.682556 | orchestrator | 23:17:35.682 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-03-10 23:17:35.682612 | orchestrator | 23:17:35.682 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.682645 | orchestrator | 23:17:35.682 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.682668 | orchestrator | 23:17:35.682 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.682703 | orchestrator | 23:17:35.682 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.682737 | orchestrator | 23:17:35.682 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.682777 | orchestrator | 23:17:35.682 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-03-10 23:17:35.682812 | orchestrator | 23:17:35.682 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.682834 | orchestrator | 23:17:35.682 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.682856 | orchestrator | 23:17:35.682 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.682871 | orchestrator | 23:17:35.682 STDOUT terraform:  } 2025-03-10 23:17:35.682920 | orchestrator | 23:17:35.682 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-03-10 23:17:35.682968 | orchestrator | 23:17:35.682 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.683002 | orchestrator | 23:17:35.682 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.683024 | orchestrator | 23:17:35.682 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.683060 | orchestrator | 23:17:35.683 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.683093 | orchestrator | 23:17:35.683 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.683134 | orchestrator | 23:17:35.683 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-03-10 23:17:35.683168 | orchestrator | 23:17:35.683 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.683191 | orchestrator | 23:17:35.683 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.683214 | orchestrator | 23:17:35.683 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.683222 | orchestrator | 23:17:35.683 STDOUT terraform:  } 2025-03-10 23:17:35.683275 | orchestrator | 23:17:35.683 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-03-10 23:17:35.683323 | orchestrator | 23:17:35.683 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.683356 | orchestrator | 23:17:35.683 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.683379 | orchestrator | 23:17:35.683 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.683413 | orchestrator | 23:17:35.683 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.683446 | orchestrator | 23:17:35.683 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.683488 | orchestrator | 23:17:35.683 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-03-10 23:17:35.683523 | orchestrator | 23:17:35.683 STDOUT 
terraform:  + region = (known after apply) 2025-03-10 23:17:35.683545 | orchestrator | 23:17:35.683 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.683567 | orchestrator | 23:17:35.683 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.683607 | orchestrator | 23:17:35.683 STDOUT terraform:  } 2025-03-10 23:17:35.683636 | orchestrator | 23:17:35.683 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-03-10 23:17:35.683684 | orchestrator | 23:17:35.683 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.683718 | orchestrator | 23:17:35.683 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.683740 | orchestrator | 23:17:35.683 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.683774 | orchestrator | 23:17:35.683 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.683810 | orchestrator | 23:17:35.683 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.683849 | orchestrator | 23:17:35.683 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-03-10 23:17:35.683883 | orchestrator | 23:17:35.683 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.683905 | orchestrator | 23:17:35.683 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.683929 | orchestrator | 23:17:35.683 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.683943 | orchestrator | 23:17:35.683 STDOUT terraform:  } 2025-03-10 23:17:35.683993 | orchestrator | 23:17:35.683 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-03-10 23:17:35.684039 | orchestrator | 23:17:35.683 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.684075 | orchestrator | 23:17:35.684 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.684096 | orchestrator | 23:17:35.684 STDOUT terraform:  + availability_zone = "nova" 
2025-03-10 23:17:35.684131 | orchestrator | 23:17:35.684 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.684164 | orchestrator | 23:17:35.684 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.684207 | orchestrator | 23:17:35.684 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-03-10 23:17:35.684241 | orchestrator | 23:17:35.684 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.684263 | orchestrator | 23:17:35.684 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.684287 | orchestrator | 23:17:35.684 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.684300 | orchestrator | 23:17:35.684 STDOUT terraform:  } 2025-03-10 23:17:35.684349 | orchestrator | 23:17:35.684 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-03-10 23:17:35.684397 | orchestrator | 23:17:35.684 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.684431 | orchestrator | 23:17:35.684 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.684454 | orchestrator | 23:17:35.684 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.684488 | orchestrator | 23:17:35.684 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.684524 | orchestrator | 23:17:35.684 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.684567 | orchestrator | 23:17:35.684 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-03-10 23:17:35.684618 | orchestrator | 23:17:35.684 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.684640 | orchestrator | 23:17:35.684 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.684662 | orchestrator | 23:17:35.684 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.684677 | orchestrator | 23:17:35.684 STDOUT terraform:  } 2025-03-10 23:17:35.684727 | orchestrator | 23:17:35.684 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-03-10 23:17:35.684774 | orchestrator | 23:17:35.684 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.684808 | orchestrator | 23:17:35.684 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.684830 | orchestrator | 23:17:35.684 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.684864 | orchestrator | 23:17:35.684 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.684898 | orchestrator | 23:17:35.684 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.684939 | orchestrator | 23:17:35.684 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-03-10 23:17:35.684973 | orchestrator | 23:17:35.684 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.684996 | orchestrator | 23:17:35.684 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.685019 | orchestrator | 23:17:35.684 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.685034 | orchestrator | 23:17:35.685 STDOUT terraform:  } 2025-03-10 23:17:35.685093 | orchestrator | 23:17:35.685 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-03-10 23:17:35.685131 | orchestrator | 23:17:35.685 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.685162 | orchestrator | 23:17:35.685 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.685183 | orchestrator | 23:17:35.685 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.685214 | orchestrator | 23:17:35.685 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.685244 | orchestrator | 23:17:35.685 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.685281 | orchestrator | 23:17:35.685 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-03-10 23:17:35.685312 | orchestrator | 23:17:35.685 STDOUT 
terraform:  + region = (known after apply) 2025-03-10 23:17:35.685332 | orchestrator | 23:17:35.685 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.685353 | orchestrator | 23:17:35.685 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.685366 | orchestrator | 23:17:35.685 STDOUT terraform:  } 2025-03-10 23:17:35.685416 | orchestrator | 23:17:35.685 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-03-10 23:17:35.685456 | orchestrator | 23:17:35.685 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.685488 | orchestrator | 23:17:35.685 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.685509 | orchestrator | 23:17:35.685 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.685540 | orchestrator | 23:17:35.685 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.685573 | orchestrator | 23:17:35.685 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.685616 | orchestrator | 23:17:35.685 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-03-10 23:17:35.685647 | orchestrator | 23:17:35.685 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.685667 | orchestrator | 23:17:35.685 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.685687 | orchestrator | 23:17:35.685 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.685701 | orchestrator | 23:17:35.685 STDOUT terraform:  } 2025-03-10 23:17:35.685746 | orchestrator | 23:17:35.685 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-03-10 23:17:35.685789 | orchestrator | 23:17:35.685 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.685819 | orchestrator | 23:17:35.685 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.685840 | orchestrator | 23:17:35.685 STDOUT terraform:  + availability_zone = "nova" 
2025-03-10 23:17:35.685871 | orchestrator | 23:17:35.685 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.685904 | orchestrator | 23:17:35.685 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.685941 | orchestrator | 23:17:35.685 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-03-10 23:17:35.685974 | orchestrator | 23:17:35.685 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.685993 | orchestrator | 23:17:35.685 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.686032 | orchestrator | 23:17:35.685 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.686080 | orchestrator | 23:17:35.686 STDOUT terraform:  } 2025-03-10 23:17:35.686087 | orchestrator | 23:17:35.686 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-03-10 23:17:35.686124 | orchestrator | 23:17:35.686 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.686154 | orchestrator | 23:17:35.686 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.686175 | orchestrator | 23:17:35.686 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.686206 | orchestrator | 23:17:35.686 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.686237 | orchestrator | 23:17:35.686 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.686277 | orchestrator | 23:17:35.686 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-03-10 23:17:35.686308 | orchestrator | 23:17:35.686 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.686328 | orchestrator | 23:17:35.686 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.686350 | orchestrator | 23:17:35.686 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.686357 | orchestrator | 23:17:35.686 STDOUT terraform:  } 2025-03-10 23:17:35.686405 | orchestrator | 23:17:35.686 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-03-10 23:17:35.686448 | orchestrator | 23:17:35.686 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.686479 | orchestrator | 23:17:35.686 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.686499 | orchestrator | 23:17:35.686 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.686531 | orchestrator | 23:17:35.686 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.686563 | orchestrator | 23:17:35.686 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.686608 | orchestrator | 23:17:35.686 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-03-10 23:17:35.686637 | orchestrator | 23:17:35.686 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.686658 | orchestrator | 23:17:35.686 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.686679 | orchestrator | 23:17:35.686 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.686686 | orchestrator | 23:17:35.686 STDOUT terraform:  } 2025-03-10 23:17:35.686734 | orchestrator | 23:17:35.686 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-03-10 23:17:35.686777 | orchestrator | 23:17:35.686 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.686810 | orchestrator | 23:17:35.686 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.686829 | orchestrator | 23:17:35.686 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.686862 | orchestrator | 23:17:35.686 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.686892 | orchestrator | 23:17:35.686 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.686930 | orchestrator | 23:17:35.686 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-03-10 23:17:35.686961 | orchestrator | 23:17:35.686 STDOUT 
terraform:  + region = (known after apply) 2025-03-10 23:17:35.686982 | orchestrator | 23:17:35.686 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.687002 | orchestrator | 23:17:35.686 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.687015 | orchestrator | 23:17:35.686 STDOUT terraform:  } 2025-03-10 23:17:35.687060 | orchestrator | 23:17:35.687 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-03-10 23:17:35.687104 | orchestrator | 23:17:35.687 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.687136 | orchestrator | 23:17:35.687 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.687156 | orchestrator | 23:17:35.687 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.687188 | orchestrator | 23:17:35.687 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.687219 | orchestrator | 23:17:35.687 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.687256 | orchestrator | 23:17:35.687 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-03-10 23:17:35.687287 | orchestrator | 23:17:35.687 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.687308 | orchestrator | 23:17:35.687 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.687328 | orchestrator | 23:17:35.687 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.687342 | orchestrator | 23:17:35.687 STDOUT terraform:  } 2025-03-10 23:17:35.687388 | orchestrator | 23:17:35.687 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-03-10 23:17:35.687430 | orchestrator | 23:17:35.687 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.687461 | orchestrator | 23:17:35.687 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.687482 | orchestrator | 23:17:35.687 STDOUT terraform:  + availability_zone = "nova" 
2025-03-10 23:17:35.687514 | orchestrator | 23:17:35.687 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.687544 | orchestrator | 23:17:35.687 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.687594 | orchestrator | 23:17:35.687 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-03-10 23:17:35.687625 | orchestrator | 23:17:35.687 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.687647 | orchestrator | 23:17:35.687 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.687668 | orchestrator | 23:17:35.687 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.687680 | orchestrator | 23:17:35.687 STDOUT terraform:  } 2025-03-10 23:17:35.687728 | orchestrator | 23:17:35.687 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-03-10 23:17:35.687772 | orchestrator | 23:17:35.687 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-10 23:17:35.687800 | orchestrator | 23:17:35.687 STDOUT terraform:  + attachment = (known after apply) 2025-03-10 23:17:35.687820 | orchestrator | 23:17:35.687 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.687850 | orchestrator | 23:17:35.687 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.687879 | orchestrator | 23:17:35.687 STDOUT terraform:  + metadata = (known after apply) 2025-03-10 23:17:35.687916 | orchestrator | 23:17:35.687 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-03-10 23:17:35.687947 | orchestrator | 23:17:35.687 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.687966 | orchestrator | 23:17:35.687 STDOUT terraform:  + size = 20 2025-03-10 23:17:35.687986 | orchestrator | 23:17:35.687 STDOUT terraform:  + volume_type = "ssd" 2025-03-10 23:17:35.687999 | orchestrator | 23:17:35.687 STDOUT terraform:  } 2025-03-10 23:17:35.688041 | orchestrator | 23:17:35.687 STDOUT terraform:  # 
openstack_compute_instance_v2.manager_server will be created 2025-03-10 23:17:35.688082 | orchestrator | 23:17:35.688 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-03-10 23:17:35.688116 | orchestrator | 23:17:35.688 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-03-10 23:17:35.688149 | orchestrator | 23:17:35.688 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-03-10 23:17:35.688183 | orchestrator | 23:17:35.688 STDOUT terraform:  + all_metadata = (known after apply) 2025-03-10 23:17:35.688218 | orchestrator | 23:17:35.688 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.688241 | orchestrator | 23:17:35.688 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.688261 | orchestrator | 23:17:35.688 STDOUT terraform:  + config_drive = true 2025-03-10 23:17:35.688295 | orchestrator | 23:17:35.688 STDOUT terraform:  + created = (known after apply) 2025-03-10 23:17:35.688329 | orchestrator | 23:17:35.688 STDOUT terraform:  + flavor_id = (known after apply) 2025-03-10 23:17:35.688358 | orchestrator | 23:17:35.688 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-03-10 23:17:35.688381 | orchestrator | 23:17:35.688 STDOUT terraform:  + force_delete = false 2025-03-10 23:17:35.688416 | orchestrator | 23:17:35.688 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.688451 | orchestrator | 23:17:35.688 STDOUT terraform:  + image_id = (known after apply) 2025-03-10 23:17:35.688484 | orchestrator | 23:17:35.688 STDOUT terraform:  + image_name = (known after apply) 2025-03-10 23:17:35.688509 | orchestrator | 23:17:35.688 STDOUT terraform:  + key_pair = "testbed" 2025-03-10 23:17:35.688538 | orchestrator | 23:17:35.688 STDOUT terraform:  + name = "testbed-manager" 2025-03-10 23:17:35.688562 | orchestrator | 23:17:35.688 STDOUT terraform:  + power_state = "active" 2025-03-10 23:17:35.688601 | orchestrator | 23:17:35.688 STDOUT terraform:  + region = (known after 
apply) 2025-03-10 23:17:35.688634 | orchestrator | 23:17:35.688 STDOUT terraform:  + security_groups = (known after apply) 2025-03-10 23:17:35.688657 | orchestrator | 23:17:35.688 STDOUT terraform:  + stop_before_destroy = false 2025-03-10 23:17:35.688691 | orchestrator | 23:17:35.688 STDOUT terraform:  + updated = (known after apply) 2025-03-10 23:17:35.688726 | orchestrator | 23:17:35.688 STDOUT terraform:  + user_data = (known after apply) 2025-03-10 23:17:35.688741 | orchestrator | 23:17:35.688 STDOUT terraform:  + block_device { 2025-03-10 23:17:35.688764 | orchestrator | 23:17:35.688 STDOUT terraform:  + boot_index = 0 2025-03-10 23:17:35.688791 | orchestrator | 23:17:35.688 STDOUT terraform:  + delete_on_termination = false 2025-03-10 23:17:35.688819 | orchestrator | 23:17:35.688 STDOUT terraform:  + destination_type = "volume" 2025-03-10 23:17:35.688847 | orchestrator | 23:17:35.688 STDOUT terraform:  + multiattach = false 2025-03-10 23:17:35.688876 | orchestrator | 23:17:35.688 STDOUT terraform:  + source_type = "volume" 2025-03-10 23:17:35.688913 | orchestrator | 23:17:35.688 STDOUT terraform:  + uuid = (known after apply) 2025-03-10 23:17:35.688929 | orchestrator | 23:17:35.688 STDOUT terraform:  } 2025-03-10 23:17:35.688935 | orchestrator | 23:17:35.688 STDOUT terraform:  + network { 2025-03-10 23:17:35.688957 | orchestrator | 23:17:35.688 STDOUT terraform:  + access_network = false 2025-03-10 23:17:35.688988 | orchestrator | 23:17:35.688 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-03-10 23:17:35.689017 | orchestrator | 23:17:35.688 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-03-10 23:17:35.689047 | orchestrator | 23:17:35.689 STDOUT terraform:  + mac = (known after apply) 2025-03-10 23:17:35.689077 | orchestrator | 23:17:35.689 STDOUT terraform:  + name = (known after apply) 2025-03-10 23:17:35.689108 | orchestrator | 23:17:35.689 STDOUT terraform:  + port = (known after apply) 2025-03-10 23:17:35.689139 | orchestrator | 
23:17:35.689 STDOUT terraform:  + uuid = (known after apply) 2025-03-10 23:17:35.689153 | orchestrator | 23:17:35.689 STDOUT terraform:  } 2025-03-10 23:17:35.689160 | orchestrator | 23:17:35.689 STDOUT terraform:  } 2025-03-10 23:17:35.689243 | orchestrator | 23:17:35.689 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-03-10 23:17:35.689285 | orchestrator | 23:17:35.689 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-03-10 23:17:35.689320 | orchestrator | 23:17:35.689 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-03-10 23:17:35.689354 | orchestrator | 23:17:35.689 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-03-10 23:17:35.689386 | orchestrator | 23:17:35.689 STDOUT terraform:  + all_metadata = (known after apply) 2025-03-10 23:17:35.689420 | orchestrator | 23:17:35.689 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.689443 | orchestrator | 23:17:35.689 STDOUT terraform:  + availability_zone = "nova" 2025-03-10 23:17:35.689464 | orchestrator | 23:17:35.689 STDOUT terraform:  + config_drive = true 2025-03-10 23:17:35.689498 | orchestrator | 23:17:35.689 STDOUT terraform:  + created = (known after apply) 2025-03-10 23:17:35.689531 | orchestrator | 23:17:35.689 STDOUT terraform:  + flavor_id = (known after apply) 2025-03-10 23:17:35.689559 | orchestrator | 23:17:35.689 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-03-10 23:17:35.689592 | orchestrator | 23:17:35.689 STDOUT terraform:  + force_delete = false 2025-03-10 23:17:35.689628 | orchestrator | 23:17:35.689 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.689663 | orchestrator | 23:17:35.689 STDOUT terraform:  + image_id = (known after apply) 2025-03-10 23:17:35.689697 | orchestrator | 23:17:35.689 STDOUT terraform:  + image_name = (known after apply) 2025-03-10 23:17:35.689721 | orchestrator | 23:17:35.689 STDOUT terraform:  + key_pair = "testbed" 2025-03-10 
2025-03-10 23:17:35.689751 | orchestrator | 23:17:35.689 STDOUT terraform:  + name = "testbed-node-0"
2025-03-10 23:17:35.689774 | orchestrator | 23:17:35.689 STDOUT terraform:  + power_state = "active"
2025-03-10 23:17:35.689809 | orchestrator | 23:17:35.689 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.689843 | orchestrator | 23:17:35.689 STDOUT terraform:  + security_groups = (known after apply)
2025-03-10 23:17:35.689866 | orchestrator | 23:17:35.689 STDOUT terraform:  + stop_before_destroy = false
2025-03-10 23:17:35.689900 | orchestrator | 23:17:35.689 STDOUT terraform:  + updated = (known after apply)
2025-03-10 23:17:35.689950 | orchestrator | 23:17:35.689 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-10 23:17:35.689965 | orchestrator | 23:17:35.689 STDOUT terraform:  + block_device {
2025-03-10 23:17:35.689989 | orchestrator | 23:17:35.689 STDOUT terraform:  + boot_index = 0
2025-03-10 23:17:35.690028 | orchestrator | 23:17:35.689 STDOUT terraform:  + delete_on_termination = false
2025-03-10 23:17:35.690054 | orchestrator | 23:17:35.690 STDOUT terraform:  + destination_type = "volume"
2025-03-10 23:17:35.690081 | orchestrator | 23:17:35.690 STDOUT terraform:  + multiattach = false
2025-03-10 23:17:35.690110 | orchestrator | 23:17:35.690 STDOUT terraform:  + source_type = "volume"
2025-03-10 23:17:35.690148 | orchestrator | 23:17:35.690 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.690155 | orchestrator | 23:17:35.690 STDOUT terraform:  }
2025-03-10 23:17:35.690172 | orchestrator | 23:17:35.690 STDOUT terraform:  + network {
2025-03-10 23:17:35.690192 | orchestrator | 23:17:35.690 STDOUT terraform:  + access_network = false
2025-03-10 23:17:35.690221 | orchestrator | 23:17:35.690 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-10 23:17:35.690252 | orchestrator | 23:17:35.690 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-10 23:17:35.690302 | orchestrator | 23:17:35.690 STDOUT terraform:  + mac = (known after apply)
2025-03-10 23:17:35.690330 | orchestrator | 23:17:35.690 STDOUT terraform:  + name = (known after apply)
2025-03-10 23:17:35.690363 | orchestrator | 23:17:35.690 STDOUT terraform:  + port = (known after apply)
2025-03-10 23:17:35.690391 | orchestrator | 23:17:35.690 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.690402 | orchestrator | 23:17:35.690 STDOUT terraform:  }
2025-03-10 23:17:35.690408 | orchestrator | 23:17:35.690 STDOUT terraform:  }
2025-03-10 23:17:35.690453 | orchestrator | 23:17:35.690 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created
2025-03-10 23:17:35.690494 | orchestrator | 23:17:35.690 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-10 23:17:35.690529 | orchestrator | 23:17:35.690 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-10 23:17:35.690561 | orchestrator | 23:17:35.690 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-10 23:17:35.690606 | orchestrator | 23:17:35.690 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-10 23:17:35.690636 | orchestrator | 23:17:35.690 STDOUT terraform:  + all_tags = (known after apply)
2025-03-10 23:17:35.690659 | orchestrator | 23:17:35.690 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:17:35.690679 | orchestrator | 23:17:35.690 STDOUT terraform:  + config_drive = true
2025-03-10 23:17:35.690721 | orchestrator | 23:17:35.690 STDOUT terraform:  + created = (known after apply)
2025-03-10 23:17:35.690749 | orchestrator | 23:17:35.690 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-10 23:17:35.690780 | orchestrator | 23:17:35.690 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-10 23:17:35.690802 | orchestrator | 23:17:35.690 STDOUT terraform:  + force_delete = false
2025-03-10 23:17:35.690837 | orchestrator | 23:17:35.690 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.690871 | orchestrator | 23:17:35.690 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:17:35.690905 | orchestrator | 23:17:35.690 STDOUT terraform:  + image_name = (known after apply)
2025-03-10 23:17:35.690929 | orchestrator | 23:17:35.690 STDOUT terraform:  + key_pair = "testbed"
2025-03-10 23:17:35.690961 | orchestrator | 23:17:35.690 STDOUT terraform:  + name = "testbed-node-1"
2025-03-10 23:17:35.690985 | orchestrator | 23:17:35.690 STDOUT terraform:  + power_state = "active"
2025-03-10 23:17:35.691017 | orchestrator | 23:17:35.690 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.691050 | orchestrator | 23:17:35.691 STDOUT terraform:  + security_groups = (known after apply)
2025-03-10 23:17:35.691072 | orchestrator | 23:17:35.691 STDOUT terraform:  + stop_before_destroy = false
2025-03-10 23:17:35.691107 | orchestrator | 23:17:35.691 STDOUT terraform:  + updated = (known after apply)
2025-03-10 23:17:35.691155 | orchestrator | 23:17:35.691 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-10 23:17:35.691171 | orchestrator | 23:17:35.691 STDOUT terraform:  + block_device {
2025-03-10 23:17:35.691195 | orchestrator | 23:17:35.691 STDOUT terraform:  + boot_index = 0
2025-03-10 23:17:35.691221 | orchestrator | 23:17:35.691 STDOUT terraform:  + delete_on_termination = false
2025-03-10 23:17:35.691251 | orchestrator | 23:17:35.691 STDOUT terraform:  + destination_type = "volume"
2025-03-10 23:17:35.691279 | orchestrator | 23:17:35.691 STDOUT terraform:  + multiattach = false
2025-03-10 23:17:35.691307 | orchestrator | 23:17:35.691 STDOUT terraform:  + source_type = "volume"
2025-03-10 23:17:35.691344 | orchestrator | 23:17:35.691 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.691351 | orchestrator | 23:17:35.691 STDOUT terraform:  }
2025-03-10 23:17:35.691368 | orchestrator | 23:17:35.691 STDOUT terraform:  + network {
2025-03-10 23:17:35.691387 | orchestrator | 23:17:35.691 STDOUT terraform:  + access_network = false
2025-03-10 23:17:35.691417 | orchestrator | 23:17:35.691 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-10 23:17:35.691447 | orchestrator | 23:17:35.691 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-10 23:17:35.691477 | orchestrator | 23:17:35.691 STDOUT terraform:  + mac = (known after apply)
2025-03-10 23:17:35.691507 | orchestrator | 23:17:35.691 STDOUT terraform:  + name = (known after apply)
2025-03-10 23:17:35.691537 | orchestrator | 23:17:35.691 STDOUT terraform:  + port = (known after apply)
2025-03-10 23:17:35.691567 | orchestrator | 23:17:35.691 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.691574 | orchestrator | 23:17:35.691 STDOUT terraform:  }
2025-03-10 23:17:35.691605 | orchestrator | 23:17:35.691 STDOUT terraform:  }
2025-03-10 23:17:35.691648 | orchestrator | 23:17:35.691 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created
2025-03-10 23:17:35.691688 | orchestrator | 23:17:35.691 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-10 23:17:35.691722 | orchestrator | 23:17:35.691 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-10 23:17:35.691756 | orchestrator | 23:17:35.691 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-10 23:17:35.691793 | orchestrator | 23:17:35.691 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-10 23:17:35.691826 | orchestrator | 23:17:35.691 STDOUT terraform:  + all_tags = (known after apply)
2025-03-10 23:17:35.691848 | orchestrator | 23:17:35.691 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:17:35.691868 | orchestrator | 23:17:35.691 STDOUT terraform:  + config_drive = true
2025-03-10 23:17:35.691902 | orchestrator | 23:17:35.691 STDOUT terraform:  + created = (known after apply)
2025-03-10 23:17:35.691935 | orchestrator | 23:17:35.691 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-10 23:17:35.691964 | orchestrator | 23:17:35.691 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-10 23:17:35.691986 | orchestrator | 23:17:35.691 STDOUT terraform:  + force_delete = false
2025-03-10 23:17:35.692021 | orchestrator | 23:17:35.691 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.692056 | orchestrator | 23:17:35.692 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:17:35.692090 | orchestrator | 23:17:35.692 STDOUT terraform:  + image_name = (known after apply)
2025-03-10 23:17:35.692114 | orchestrator | 23:17:35.692 STDOUT terraform:  + key_pair = "testbed"
2025-03-10 23:17:35.692144 | orchestrator | 23:17:35.692 STDOUT terraform:  + name = "testbed-node-2"
2025-03-10 23:17:35.692168 | orchestrator | 23:17:35.692 STDOUT terraform:  + power_state = "active"
2025-03-10 23:17:35.692201 | orchestrator | 23:17:35.692 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.692234 | orchestrator | 23:17:35.692 STDOUT terraform:  + security_groups = (known after apply)
2025-03-10 23:17:35.692257 | orchestrator | 23:17:35.692 STDOUT terraform:  + stop_before_destroy = false
2025-03-10 23:17:35.692292 | orchestrator | 23:17:35.692 STDOUT terraform:  + updated = (known after apply)
2025-03-10 23:17:35.692340 | orchestrator | 23:17:35.692 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-10 23:17:35.692356 | orchestrator | 23:17:35.692 STDOUT terraform:  + block_device {
2025-03-10 23:17:35.692379 | orchestrator | 23:17:35.692 STDOUT terraform:  + boot_index = 0
2025-03-10 23:17:35.692406 | orchestrator | 23:17:35.692 STDOUT terraform:  + delete_on_termination = false
2025-03-10 23:17:35.692434 | orchestrator | 23:17:35.692 STDOUT terraform:  + destination_type = "volume"
2025-03-10 23:17:35.692461 | orchestrator | 23:17:35.692 STDOUT terraform:  + multiattach = false
2025-03-10 23:17:35.692490 | orchestrator | 23:17:35.692 STDOUT terraform:  + source_type = "volume"
2025-03-10 23:17:35.692528 | orchestrator | 23:17:35.692 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.692535 | orchestrator | 23:17:35.692 STDOUT terraform:  }
2025-03-10 23:17:35.692551 | orchestrator | 23:17:35.692 STDOUT terraform:  + network {
2025-03-10 23:17:35.692571 | orchestrator | 23:17:35.692 STDOUT terraform:  + access_network = false
2025-03-10 23:17:35.692606 | orchestrator | 23:17:35.692 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-10 23:17:35.692637 | orchestrator | 23:17:35.692 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-10 23:17:35.692668 | orchestrator | 23:17:35.692 STDOUT terraform:  + mac = (known after apply)
2025-03-10 23:17:35.692697 | orchestrator | 23:17:35.692 STDOUT terraform:  + name = (known after apply)
2025-03-10 23:17:35.692727 | orchestrator | 23:17:35.692 STDOUT terraform:  + port = (known after apply)
2025-03-10 23:17:35.692757 | orchestrator | 23:17:35.692 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.692770 | orchestrator | 23:17:35.692 STDOUT terraform:  }
2025-03-10 23:17:35.692783 | orchestrator | 23:17:35.692 STDOUT terraform:  }
2025-03-10 23:17:35.692826 | orchestrator | 23:17:35.692 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created
2025-03-10 23:17:35.692866 | orchestrator | 23:17:35.692 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-10 23:17:35.692901 | orchestrator | 23:17:35.692 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-10 23:17:35.692935 | orchestrator | 23:17:35.692 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-10 23:17:35.692968 | orchestrator | 23:17:35.692 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-10 23:17:35.693002 | orchestrator | 23:17:35.692 STDOUT terraform:  + all_tags = (known after apply)
2025-03-10 23:17:35.693028 | orchestrator | 23:17:35.692 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:17:35.693045 | orchestrator | 23:17:35.693 STDOUT terraform:  + config_drive = true
2025-03-10 23:17:35.693079 | orchestrator | 23:17:35.693 STDOUT terraform:  + created = (known after apply)
2025-03-10 23:17:35.693112 | orchestrator | 23:17:35.693 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-10 23:17:35.693142 | orchestrator | 23:17:35.693 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-10 23:17:35.693164 | orchestrator | 23:17:35.693 STDOUT terraform:  + force_delete = false
2025-03-10 23:17:35.693200 | orchestrator | 23:17:35.693 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.693235 | orchestrator | 23:17:35.693 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:17:35.693267 | orchestrator | 23:17:35.693 STDOUT terraform:  + image_name = (known after apply)
2025-03-10 23:17:35.693291 | orchestrator | 23:17:35.693 STDOUT terraform:  + key_pair = "testbed"
2025-03-10 23:17:35.693321 | orchestrator | 23:17:35.693 STDOUT terraform:  + name = "testbed-node-3"
2025-03-10 23:17:35.693344 | orchestrator | 23:17:35.693 STDOUT terraform:  + power_state = "active"
2025-03-10 23:17:35.693379 | orchestrator | 23:17:35.693 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.693414 | orchestrator | 23:17:35.693 STDOUT terraform:  + security_groups = (known after apply)
2025-03-10 23:17:35.693435 | orchestrator | 23:17:35.693 STDOUT terraform:  + stop_before_destroy = false
2025-03-10 23:17:35.693469 | orchestrator | 23:17:35.693 STDOUT terraform:  + updated = (known after apply)
2025-03-10 23:17:35.693517 | orchestrator | 23:17:35.693 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-10 23:17:35.693533 | orchestrator | 23:17:35.693 STDOUT terraform:  + block_device {
2025-03-10 23:17:35.693556 | orchestrator | 23:17:35.693 STDOUT terraform:  + boot_index = 0
2025-03-10 23:17:35.693598 | orchestrator | 23:17:35.693 STDOUT terraform:  + delete_on_termination = false
2025-03-10 23:17:35.693626 | orchestrator | 23:17:35.693 STDOUT terraform:  + destination_type = "volume"
2025-03-10 23:17:35.693654 | orchestrator | 23:17:35.693 STDOUT terraform:  + multiattach = false
2025-03-10 23:17:35.693683 | orchestrator | 23:17:35.693 STDOUT terraform:  + source_type = "volume"
2025-03-10 23:17:35.693721 | orchestrator | 23:17:35.693 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.693733 | orchestrator | 23:17:35.693 STDOUT terraform:  }
2025-03-10 23:17:35.693747 | orchestrator | 23:17:35.693 STDOUT terraform:  + network {
2025-03-10 23:17:35.693768 | orchestrator | 23:17:35.693 STDOUT terraform:  + access_network = false
2025-03-10 23:17:35.693798 | orchestrator | 23:17:35.693 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-10 23:17:35.693826 | orchestrator | 23:17:35.693 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-10 23:17:35.693857 | orchestrator | 23:17:35.693 STDOUT terraform:  + mac = (known after apply)
2025-03-10 23:17:35.693888 | orchestrator | 23:17:35.693 STDOUT terraform:  + name = (known after apply)
2025-03-10 23:17:35.693918 | orchestrator | 23:17:35.693 STDOUT terraform:  + port = (known after apply)
2025-03-10 23:17:35.693948 | orchestrator | 23:17:35.693 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.693961 | orchestrator | 23:17:35.693 STDOUT terraform:  }
2025-03-10 23:17:35.693973 | orchestrator | 23:17:35.693 STDOUT terraform:  }
2025-03-10 23:17:35.694035 | orchestrator | 23:17:35.693 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created
2025-03-10 23:17:35.694066 | orchestrator | 23:17:35.694 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-10 23:17:35.694101 | orchestrator | 23:17:35.694 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-10 23:17:35.697058 | orchestrator | 23:17:35.694 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-10 23:17:35.697075 | orchestrator | 23:17:35.694 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-10 23:17:35.697080 | orchestrator | 23:17:35.694 STDOUT terraform:  + all_tags = (known after apply)
2025-03-10 23:17:35.697085 | orchestrator | 23:17:35.694 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:17:35.697090 | orchestrator | 23:17:35.694 STDOUT terraform:  + config_drive = true
2025-03-10 23:17:35.697095 | orchestrator | 23:17:35.694 STDOUT terraform:  + created = (known after apply)
2025-03-10 23:17:35.697100 | orchestrator | 23:17:35.694 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-10 23:17:35.697105 | orchestrator | 23:17:35.694 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-10 23:17:35.697110 | orchestrator | 23:17:35.694 STDOUT terraform:  + force_delete = false
2025-03-10 23:17:35.697115 | orchestrator | 23:17:35.694 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.697120 | orchestrator | 23:17:35.694 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:17:35.697125 | orchestrator | 23:17:35.694 STDOUT terraform:  + image_name = (known after apply)
2025-03-10 23:17:35.697129 | orchestrator | 23:17:35.694 STDOUT terraform:  + key_pair = "testbed"
2025-03-10 23:17:35.697134 | orchestrator | 23:17:35.694 STDOUT terraform:  + name = "testbed-node-4"
2025-03-10 23:17:35.697139 | orchestrator | 23:17:35.694 STDOUT terraform:  + power_state = "active"
2025-03-10 23:17:35.697144 | orchestrator | 23:17:35.694 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.697149 | orchestrator | 23:17:35.694 STDOUT terraform:  + security_groups = (known after apply)
2025-03-10 23:17:35.697153 | orchestrator | 23:17:35.694 STDOUT terraform:  + stop_before_destroy = false
2025-03-10 23:17:35.697158 | orchestrator | 23:17:35.694 STDOUT terraform:  + updated = (known after apply)
2025-03-10 23:17:35.697163 | orchestrator | 23:17:35.694 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-10 23:17:35.697168 | orchestrator | 23:17:35.694 STDOUT terraform:  + block_device {
2025-03-10 23:17:35.697173 | orchestrator | 23:17:35.694 STDOUT terraform:  + boot_index = 0
2025-03-10 23:17:35.697183 | orchestrator | 23:17:35.694 STDOUT terraform:  + delete_on_termination = false
2025-03-10 23:17:35.697189 | orchestrator | 23:17:35.694 STDOUT terraform:  + destination_type = "volume"
2025-03-10 23:17:35.697193 | orchestrator | 23:17:35.694 STDOUT terraform:  + multiattach = false
2025-03-10 23:17:35.697198 | orchestrator | 23:17:35.694 STDOUT terraform:  + source_type = "volume"
2025-03-10 23:17:35.697203 | orchestrator | 23:17:35.694 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.697208 | orchestrator | 23:17:35.694 STDOUT terraform:  }
2025-03-10 23:17:35.697213 | orchestrator | 23:17:35.694 STDOUT terraform:  + network {
2025-03-10 23:17:35.697218 | orchestrator | 23:17:35.694 STDOUT terraform:  + access_network = false
2025-03-10 23:17:35.697230 | orchestrator | 23:17:35.694 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-10 23:17:35.697235 | orchestrator | 23:17:35.694 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-10 23:17:35.697240 | orchestrator | 23:17:35.694 STDOUT terraform:  + mac = (known after apply)
2025-03-10 23:17:35.697245 | orchestrator | 23:17:35.694 STDOUT terraform:  + name = (known after apply)
2025-03-10 23:17:35.697249 | orchestrator | 23:17:35.694 STDOUT terraform:  + port = (known after apply)
2025-03-10 23:17:35.697254 | orchestrator | 23:17:35.694 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.697259 | orchestrator | 23:17:35.695 STDOUT terraform:  }
2025-03-10 23:17:35.697264 | orchestrator | 23:17:35.695 STDOUT terraform:  }
2025-03-10 23:17:35.697273 | orchestrator | 23:17:35.695 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created
2025-03-10 23:17:35.697278 | orchestrator | 23:17:35.695 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-03-10 23:17:35.697283 | orchestrator | 23:17:35.695 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-03-10 23:17:35.697288 | orchestrator | 23:17:35.695 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-03-10 23:17:35.697293 | orchestrator | 23:17:35.695 STDOUT terraform:  + all_metadata = (known after apply)
2025-03-10 23:17:35.697298 | orchestrator | 23:17:35.695 STDOUT terraform:  + all_tags = (known after apply)
2025-03-10 23:17:35.697303 | orchestrator | 23:17:35.695 STDOUT terraform:  + availability_zone = "nova"
2025-03-10 23:17:35.697308 | orchestrator | 23:17:35.695 STDOUT terraform:  + config_drive = true
2025-03-10 23:17:35.697313 | orchestrator | 23:17:35.695 STDOUT terraform:  + created = (known after apply)
2025-03-10 23:17:35.697318 | orchestrator | 23:17:35.695 STDOUT terraform:  + flavor_id = (known after apply)
2025-03-10 23:17:35.697324 | orchestrator | 23:17:35.695 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-03-10 23:17:35.697329 | orchestrator | 23:17:35.695 STDOUT terraform:  + force_delete = false
2025-03-10 23:17:35.697334 | orchestrator | 23:17:35.695 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.697339 | orchestrator | 23:17:35.695 STDOUT terraform:  + image_id = (known after apply)
2025-03-10 23:17:35.697346 | orchestrator | 23:17:35.695 STDOUT terraform:  + image_name = (known after apply)
2025-03-10 23:17:35.697351 | orchestrator | 23:17:35.695 STDOUT terraform:  + key_pair = "testbed"
2025-03-10 23:17:35.697356 | orchestrator | 23:17:35.695 STDOUT terraform:  + name = "testbed-node-5"
2025-03-10 23:17:35.697361 | orchestrator | 23:17:35.695 STDOUT terraform:  + power_state = "active"
2025-03-10 23:17:35.697365 | orchestrator | 23:17:35.695 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.697370 | orchestrator | 23:17:35.695 STDOUT terraform:  + security_groups = (known after apply)
2025-03-10 23:17:35.697375 | orchestrator | 23:17:35.695 STDOUT terraform:  + stop_before_destroy = false
2025-03-10 23:17:35.697380 | orchestrator | 23:17:35.695 STDOUT terraform:  + updated = (known after apply)
2025-03-10 23:17:35.697385 | orchestrator | 23:17:35.695 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-03-10 23:17:35.697390 | orchestrator | 23:17:35.695 STDOUT terraform:  + block_device {
2025-03-10 23:17:35.697394 | orchestrator | 23:17:35.695 STDOUT terraform:  + boot_index = 0
2025-03-10 23:17:35.697399 | orchestrator | 23:17:35.695 STDOUT terraform:  + delete_on_termination = false
2025-03-10 23:17:35.697405 | orchestrator | 23:17:35.695 STDOUT terraform:  + destination_type = "volume"
2025-03-10 23:17:35.697410 | orchestrator | 23:17:35.695 STDOUT terraform:  + multiattach = false
2025-03-10 23:17:35.697415 | orchestrator | 23:17:35.695 STDOUT terraform:  + source_type = "volume"
2025-03-10 23:17:35.697419 | orchestrator | 23:17:35.695 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.697424 | orchestrator | 23:17:35.695 STDOUT terraform:  }
2025-03-10 23:17:35.697429 | orchestrator | 23:17:35.695 STDOUT terraform:  + network {
2025-03-10 23:17:35.697434 | orchestrator | 23:17:35.695 STDOUT terraform:  + access_network = false
2025-03-10 23:17:35.697438 | orchestrator | 23:17:35.695 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-03-10 23:17:35.697443 | orchestrator | 23:17:35.695 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-03-10 23:17:35.697448 | orchestrator | 23:17:35.695 STDOUT terraform:  + mac = (known after apply)
2025-03-10 23:17:35.697453 | orchestrator | 23:17:35.695 STDOUT terraform:  + name = (known after apply)
2025-03-10 23:17:35.697458 | orchestrator | 23:17:35.695 STDOUT terraform:  + port = (known after apply)
2025-03-10 23:17:35.697465 | orchestrator | 23:17:35.696 STDOUT terraform:  + uuid = (known after apply)
2025-03-10 23:17:35.697470 | orchestrator | 23:17:35.696 STDOUT terraform:  }
2025-03-10 23:17:35.697475 | orchestrator | 23:17:35.696 STDOUT terraform:  }
2025-03-10 23:17:35.697482 | orchestrator | 23:17:35.696 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created
2025-03-10 23:17:35.697487 | orchestrator | 23:17:35.696 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" {
2025-03-10 23:17:35.697492 | orchestrator | 23:17:35.696 STDOUT terraform:  + fingerprint = (known after apply)
2025-03-10 23:17:35.697496 | orchestrator | 23:17:35.696 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.697504 | orchestrator | 23:17:35.696 STDOUT terraform:  + name = "testbed"
2025-03-10 23:17:35.697509 | orchestrator | 23:17:35.696 STDOUT terraform:  + private_key = (sensitive value)
2025-03-10 23:17:35.697514 | orchestrator | 23:17:35.696 STDOUT terraform:  + public_key = (known after apply)
2025-03-10 23:17:35.697518 | orchestrator | 23:17:35.696 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.697523 | orchestrator | 23:17:35.696 STDOUT terraform:  + user_id = (known after apply)
2025-03-10 23:17:35.697528 | orchestrator | 23:17:35.696 STDOUT terraform:  }
2025-03-10 23:17:35.697533 | orchestrator | 23:17:35.696 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
2025-03-10 23:17:35.697538 | orchestrator | 23:17:35.696 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.697543 | orchestrator | 23:17:35.696 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.697548 | orchestrator | 23:17:35.696 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.697553 | orchestrator | 23:17:35.696 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.697558 | orchestrator | 23:17:35.696 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.697562 | orchestrator | 23:17:35.696 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.697567 | orchestrator | 23:17:35.696 STDOUT terraform:  }
2025-03-10 23:17:35.697572 | orchestrator | 23:17:35.696 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
2025-03-10 23:17:35.697600 | orchestrator | 23:17:35.696 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.697606 | orchestrator | 23:17:35.696 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.697611 | orchestrator | 23:17:35.696 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.697616 | orchestrator | 23:17:35.696 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.697621 | orchestrator | 23:17:35.696 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.697628 | orchestrator | 23:17:35.696 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.697633 | orchestrator | 23:17:35.696 STDOUT terraform:  }
2025-03-10 23:17:35.697638 | orchestrator | 23:17:35.696 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
2025-03-10 23:17:35.697643 | orchestrator | 23:17:35.696 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.697648 | orchestrator | 23:17:35.696 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.697653 | orchestrator | 23:17:35.696 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.697657 | orchestrator | 23:17:35.696 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.697662 | orchestrator | 23:17:35.696 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.697667 | orchestrator | 23:17:35.696 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.697674 | orchestrator | 23:17:35.696 STDOUT terraform:  }
2025-03-10 23:17:35.697679 | orchestrator | 23:17:35.696 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2025-03-10 23:17:35.697687 | orchestrator | 23:17:35.697 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.697741 | orchestrator | 23:17:35.697 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.697747 | orchestrator | 23:17:35.697 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.697752 | orchestrator | 23:17:35.697 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.697757 | orchestrator | 23:17:35.697 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.697762 | orchestrator | 23:17:35.697 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.697767 | orchestrator | 23:17:35.697 STDOUT terraform:  }
2025-03-10 23:17:35.697772 | orchestrator | 23:17:35.697 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2025-03-10 23:17:35.697776 | orchestrator | 23:17:35.697 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.697781 | orchestrator | 23:17:35.697 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.697786 | orchestrator | 23:17:35.697 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.697791 | orchestrator | 23:17:35.697 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.697796 | orchestrator | 23:17:35.697 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.697801 | orchestrator | 23:17:35.697 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.697806 | orchestrator | 23:17:35.697 STDOUT terraform:  }
2025-03-10 23:17:35.697811 | orchestrator | 23:17:35.697 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2025-03-10 23:17:35.697816 | orchestrator | 23:17:35.697 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.697821 | orchestrator | 23:17:35.697 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.697825 | orchestrator | 23:17:35.697 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.697830 | orchestrator | 23:17:35.697 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.697835 | orchestrator | 23:17:35.697 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.697840 | orchestrator | 23:17:35.697 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.697845 | orchestrator | 23:17:35.697 STDOUT terraform:  }
2025-03-10 23:17:35.697850 | orchestrator | 23:17:35.697 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2025-03-10 23:17:35.697859 | orchestrator | 23:17:35.697 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.697864 | orchestrator | 23:17:35.697 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.697869 | orchestrator | 23:17:35.697 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.697873 | orchestrator | 23:17:35.697 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.697881 | orchestrator | 23:17:35.697 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.697886 | orchestrator | 23:17:35.697 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.697892 | orchestrator | 23:17:35.697 STDOUT terraform:  }
2025-03-10 23:17:35.697916 | orchestrator | 23:17:35.697 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2025-03-10 23:17:35.697958 | orchestrator | 23:17:35.697 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.697985 | orchestrator | 23:17:35.697 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.698023 | orchestrator | 23:17:35.697 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.698048 | orchestrator | 23:17:35.698 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.698078 | orchestrator | 23:17:35.698 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.698103 | orchestrator | 23:17:35.698 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.698109 | orchestrator | 23:17:35.698 STDOUT terraform:  }
2025-03-10 23:17:35.698160 | orchestrator | 23:17:35.698 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2025-03-10 23:17:35.698207 | orchestrator | 23:17:35.698 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.698234 | orchestrator | 23:17:35.698 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.698262 | orchestrator | 23:17:35.698 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.698289 | orchestrator | 23:17:35.698 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.698316 | orchestrator | 23:17:35.698 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.698342 | orchestrator | 23:17:35.698 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.698357 | orchestrator | 23:17:35.698 STDOUT terraform:  }
2025-03-10 23:17:35.698405 | orchestrator | 23:17:35.698 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created
2025-03-10 23:17:35.698452 | orchestrator | 23:17:35.698 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.698481 | orchestrator | 23:17:35.698 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.698509 | orchestrator | 23:17:35.698 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.698536 | orchestrator | 23:17:35.698 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.698563 | orchestrator | 23:17:35.698 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.698597 | orchestrator | 23:17:35.698 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.698604 | orchestrator | 23:17:35.698 STDOUT terraform:  }
2025-03-10 23:17:35.698656 | orchestrator | 23:17:35.698 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
2025-03-10 23:17:35.698703 | orchestrator | 23:17:35.698 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.698732 | orchestrator | 23:17:35.698 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.698759 | orchestrator | 23:17:35.698 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.698786 | orchestrator | 23:17:35.698 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.698812 | orchestrator | 23:17:35.698 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.698839 | orchestrator | 23:17:35.698 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.698854 | orchestrator | 23:17:35.698 STDOUT terraform:  }
2025-03-10 23:17:35.698901 | orchestrator | 23:17:35.698 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
2025-03-10 23:17:35.698947 | orchestrator | 23:17:35.698 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.698974 | orchestrator | 23:17:35.698 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.699002 | orchestrator | 23:17:35.698 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.699028 | orchestrator | 23:17:35.698 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.699055 | orchestrator | 23:17:35.699 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.699082 | orchestrator | 23:17:35.699 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.699088 | orchestrator | 23:17:35.699 STDOUT terraform:  }
2025-03-10 23:17:35.699141 | orchestrator | 23:17:35.699 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
2025-03-10 23:17:35.699187 | orchestrator | 23:17:35.699 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.699214 | orchestrator | 23:17:35.699 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.699242 | orchestrator | 23:17:35.699 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.699269 | orchestrator | 23:17:35.699 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.699297 | orchestrator | 23:17:35.699 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.699324 | orchestrator | 23:17:35.699 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.699330 | orchestrator | 23:17:35.699 STDOUT terraform:  }
2025-03-10 23:17:35.699381 | orchestrator | 23:17:35.699 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
2025-03-10 23:17:35.699428 | orchestrator | 23:17:35.699 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.699455 | orchestrator | 23:17:35.699 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.699486 | orchestrator | 23:17:35.699 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.699509 | orchestrator | 23:17:35.699 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.699537 | orchestrator | 23:17:35.699 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.699564 | orchestrator | 23:17:35.699 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.699574 | orchestrator | 23:17:35.699 STDOUT terraform:  }
2025-03-10 23:17:35.699629 | orchestrator | 23:17:35.699 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
2025-03-10 23:17:35.699675 | orchestrator | 23:17:35.699 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.699702 | orchestrator | 23:17:35.699 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.699731 | orchestrator | 23:17:35.699 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.699758 | orchestrator | 23:17:35.699 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.699785 | orchestrator | 23:17:35.699 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.699812 | orchestrator | 23:17:35.699 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.699821 | orchestrator | 23:17:35.699 STDOUT terraform:  }
2025-03-10 23:17:35.699869 | orchestrator | 23:17:35.699 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
2025-03-10 23:17:35.699916 | orchestrator | 23:17:35.699 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2025-03-10 23:17:35.699943 | orchestrator | 23:17:35.699 STDOUT terraform:  + device = (known after apply)
2025-03-10 23:17:35.699970 | orchestrator | 23:17:35.699 STDOUT terraform:  + id = (known after apply)
2025-03-10 23:17:35.699998 | orchestrator | 23:17:35.699 STDOUT terraform:  + instance_id = (known after apply)
2025-03-10 23:17:35.700026 | orchestrator | 23:17:35.699 STDOUT terraform:  + region = (known after apply)
2025-03-10 23:17:35.700053 | orchestrator | 23:17:35.700 STDOUT terraform:  + volume_id = (known after apply)
2025-03-10 23:17:35.700068 |
orchestrator | 23:17:35.700 STDOUT terraform:  } 2025-03-10 23:17:35.700116 | orchestrator | 23:17:35.700 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-03-10 23:17:35.700164 | orchestrator | 23:17:35.700 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-03-10 23:17:35.700191 | orchestrator | 23:17:35.700 STDOUT terraform:  + device = (known after apply) 2025-03-10 23:17:35.700220 | orchestrator | 23:17:35.700 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.700247 | orchestrator | 23:17:35.700 STDOUT terraform:  + instance_id = (known after apply) 2025-03-10 23:17:35.700275 | orchestrator | 23:17:35.700 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.700302 | orchestrator | 23:17:35.700 STDOUT terraform:  + volume_id = (known after apply) 2025-03-10 23:17:35.700309 | orchestrator | 23:17:35.700 STDOUT terraform:  } 2025-03-10 23:17:35.700363 | orchestrator | 23:17:35.700 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-03-10 23:17:35.700409 | orchestrator | 23:17:35.700 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-03-10 23:17:35.700437 | orchestrator | 23:17:35.700 STDOUT terraform:  + device = (known after apply) 2025-03-10 23:17:35.700464 | orchestrator | 23:17:35.700 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.700491 | orchestrator | 23:17:35.700 STDOUT terraform:  + instance_id = (known after apply) 2025-03-10 23:17:35.700519 | orchestrator | 23:17:35.700 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.700547 | orchestrator | 23:17:35.700 STDOUT terraform:  + volume_id = (known after apply) 2025-03-10 23:17:35.700554 | orchestrator | 23:17:35.700 STDOUT terraform:  } 2025-03-10 23:17:35.700629 | orchestrator | 23:17:35.700 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-03-10 23:17:35.700684 | orchestrator | 23:17:35.700 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-03-10 23:17:35.700711 | orchestrator | 23:17:35.700 STDOUT terraform:  + fixed_ip = (known after apply) 2025-03-10 23:17:35.700738 | orchestrator | 23:17:35.700 STDOUT terraform:  + floating_ip = (known after apply) 2025-03-10 23:17:35.700765 | orchestrator | 23:17:35.700 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.700805 | orchestrator | 23:17:35.700 STDOUT terraform:  + port_id = (known after apply) 2025-03-10 23:17:35.700824 | orchestrator | 23:17:35.700 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.700872 | orchestrator | 23:17:35.700 STDOUT terraform:  } 2025-03-10 23:17:35.700879 | orchestrator | 23:17:35.700 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-03-10 23:17:35.700919 | orchestrator | 23:17:35.700 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-03-10 23:17:35.700943 | orchestrator | 23:17:35.700 STDOUT terraform:  + address = (known after apply) 2025-03-10 23:17:35.700967 | orchestrator | 23:17:35.700 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.700991 | orchestrator | 23:17:35.700 STDOUT terraform:  + dns_domain = (known after apply) 2025-03-10 23:17:35.701015 | orchestrator | 23:17:35.700 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:17:35.701039 | orchestrator | 23:17:35.701 STDOUT terraform:  + fixed_ip = (known after apply) 2025-03-10 23:17:35.701063 | orchestrator | 23:17:35.701 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.701079 | orchestrator | 23:17:35.701 STDOUT terraform:  + pool = "public" 2025-03-10 23:17:35.701103 | orchestrator | 23:17:35.701 STDOUT terraform:  + 
port_id = (known after apply) 2025-03-10 23:17:35.701127 | orchestrator | 23:17:35.701 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.701151 | orchestrator | 23:17:35.701 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:17:35.701176 | orchestrator | 23:17:35.701 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.701183 | orchestrator | 23:17:35.701 STDOUT terraform:  } 2025-03-10 23:17:35.701227 | orchestrator | 23:17:35.701 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-03-10 23:17:35.701270 | orchestrator | 23:17:35.701 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-03-10 23:17:35.701304 | orchestrator | 23:17:35.701 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:17:35.701339 | orchestrator | 23:17:35.701 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.701360 | orchestrator | 23:17:35.701 STDOUT terraform:  + availability_zone_hints = [ 2025-03-10 23:17:35.701367 | orchestrator | 23:17:35.701 STDOUT terraform:  + "nova", 2025-03-10 23:17:35.701382 | orchestrator | 23:17:35.701 STDOUT terraform:  ] 2025-03-10 23:17:35.701417 | orchestrator | 23:17:35.701 STDOUT terraform:  + dns_domain = (known after apply) 2025-03-10 23:17:35.701453 | orchestrator | 23:17:35.701 STDOUT terraform:  + external = (known after apply) 2025-03-10 23:17:35.701489 | orchestrator | 23:17:35.701 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.701524 | orchestrator | 23:17:35.701 STDOUT terraform:  + mtu = (known after apply) 2025-03-10 23:17:35.701563 | orchestrator | 23:17:35.701 STDOUT terraform:  + name = "net-testbed-management" 2025-03-10 23:17:35.701732 | orchestrator | 23:17:35.701 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:17:35.701806 | orchestrator | 23:17:35.701 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 
23:17:35.701823 | orchestrator | 23:17:35.701 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.701837 | orchestrator | 23:17:35.701 STDOUT terraform:  + shared = (known after apply) 2025-03-10 23:17:35.701854 | orchestrator | 23:17:35.701 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.701884 | orchestrator | 23:17:35.701 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-03-10 23:17:35.701898 | orchestrator | 23:17:35.701 STDOUT terraform:  + segments (known after apply) 2025-03-10 23:17:35.701912 | orchestrator | 23:17:35.701 STDOUT terraform:  } 2025-03-10 23:17:35.701925 | orchestrator | 23:17:35.701 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-03-10 23:17:35.701942 | orchestrator | 23:17:35.701 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-03-10 23:17:35.701983 | orchestrator | 23:17:35.701 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:17:35.701997 | orchestrator | 23:17:35.701 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:17:35.702061 | orchestrator | 23:17:35.701 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:17:35.702080 | orchestrator | 23:17:35.701 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.702097 | orchestrator | 23:17:35.702 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:17:35.702138 | orchestrator | 23:17:35.702 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:17:35.702154 | orchestrator | 23:17:35.702 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:17:35.702207 | orchestrator | 23:17:35.702 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:17:35.702224 | orchestrator | 23:17:35.702 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.702270 | orchestrator | 23:17:35.702 STDOUT terraform:  + 
mac_address = (known after apply) 2025-03-10 23:17:35.702299 | orchestrator | 23:17:35.702 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:17:35.702329 | orchestrator | 23:17:35.702 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:17:35.702345 | orchestrator | 23:17:35.702 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:17:35.702361 | orchestrator | 23:17:35.702 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.702408 | orchestrator | 23:17:35.702 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:17:35.702425 | orchestrator | 23:17:35.702 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.702440 | orchestrator | 23:17:35.702 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.702456 | orchestrator | 23:17:35.702 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:17:35.702472 | orchestrator | 23:17:35.702 STDOUT terraform:  } 2025-03-10 23:17:35.702487 | orchestrator | 23:17:35.702 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.702511 | orchestrator | 23:17:35.702 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:17:35.702527 | orchestrator | 23:17:35.702 STDOUT terraform:  } 2025-03-10 23:17:35.702542 | orchestrator | 23:17:35.702 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:17:35.702558 | orchestrator | 23:17:35.702 STDOUT terraform:  + fixed_ip { 2025-03-10 23:17:35.702574 | orchestrator | 23:17:35.702 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-03-10 23:17:35.702614 | orchestrator | 23:17:35.702 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:17:35.702628 | orchestrator | 23:17:35.702 STDOUT terraform:  } 2025-03-10 23:17:35.702643 | orchestrator | 23:17:35.702 STDOUT terraform:  } 2025-03-10 23:17:35.702670 | orchestrator | 23:17:35.702 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-03-10 23:17:35.702727 | orchestrator | 23:17:35.702 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-10 23:17:35.702744 | orchestrator | 23:17:35.702 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:17:35.702792 | orchestrator | 23:17:35.702 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:17:35.702808 | orchestrator | 23:17:35.702 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:17:35.702859 | orchestrator | 23:17:35.702 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.702875 | orchestrator | 23:17:35.702 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:17:35.702926 | orchestrator | 23:17:35.702 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:17:35.702942 | orchestrator | 23:17:35.702 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:17:35.702992 | orchestrator | 23:17:35.702 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:17:35.703009 | orchestrator | 23:17:35.702 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.703060 | orchestrator | 23:17:35.703 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:17:35.703076 | orchestrator | 23:17:35.703 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:17:35.703126 | orchestrator | 23:17:35.703 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:17:35.703143 | orchestrator | 23:17:35.703 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:17:35.703183 | orchestrator | 23:17:35.703 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.703199 | orchestrator | 23:17:35.703 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:17:35.703252 | orchestrator | 23:17:35.703 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.703290 | 
orchestrator | 23:17:35.703 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.703307 | orchestrator | 23:17:35.703 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:17:35.703331 | orchestrator | 23:17:35.703 STDOUT terraform:  } 2025-03-10 23:17:35.703344 | orchestrator | 23:17:35.703 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.703360 | orchestrator | 23:17:35.703 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-10 23:17:35.703406 | orchestrator | 23:17:35.703 STDOUT terraform:  } 2025-03-10 23:17:35.703419 | orchestrator | 23:17:35.703 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.703435 | orchestrator | 23:17:35.703 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:17:35.703449 | orchestrator | 23:17:35.703 STDOUT terraform:  } 2025-03-10 23:17:35.703462 | orchestrator | 23:17:35.703 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.703475 | orchestrator | 23:17:35.703 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-10 23:17:35.703490 | orchestrator | 23:17:35.703 STDOUT terraform:  } 2025-03-10 23:17:35.703536 | orchestrator | 23:17:35.703 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:17:35.703550 | orchestrator | 23:17:35.703 STDOUT terraform:  + fixed_ip { 2025-03-10 23:17:35.703563 | orchestrator | 23:17:35.703 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-03-10 23:17:35.703595 | orchestrator | 23:17:35.703 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:17:35.703643 | orchestrator | 23:17:35.703 STDOUT terraform:  } 2025-03-10 23:17:35.703657 | orchestrator | 23:17:35.703 STDOUT terraform:  } 2025-03-10 23:17:35.703670 | orchestrator | 23:17:35.703 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-03-10 23:17:35.703688 | orchestrator | 23:17:35.703 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-10 
23:17:35.703701 | orchestrator | 23:17:35.703 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:17:35.703717 | orchestrator | 23:17:35.703 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:17:35.703732 | orchestrator | 23:17:35.703 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:17:35.703777 | orchestrator | 23:17:35.703 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.703794 | orchestrator | 23:17:35.703 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:17:35.703835 | orchestrator | 23:17:35.703 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:17:35.703858 | orchestrator | 23:17:35.703 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:17:35.703900 | orchestrator | 23:17:35.703 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:17:35.703916 | orchestrator | 23:17:35.703 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.703968 | orchestrator | 23:17:35.703 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:17:35.703985 | orchestrator | 23:17:35.703 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:17:35.704034 | orchestrator | 23:17:35.703 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:17:35.704051 | orchestrator | 23:17:35.704 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:17:35.704101 | orchestrator | 23:17:35.704 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.704117 | orchestrator | 23:17:35.704 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:17:35.704166 | orchestrator | 23:17:35.704 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.704204 | orchestrator | 23:17:35.704 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.704221 | orchestrator | 23:17:35.704 STDOUT terraform:  + ip_address = 
"192.168.112.0/20" 2025-03-10 23:17:35.704267 | orchestrator | 23:17:35.704 STDOUT terraform:  } 2025-03-10 23:17:35.704281 | orchestrator | 23:17:35.704 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.704297 | orchestrator | 23:17:35.704 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-10 23:17:35.704310 | orchestrator | 23:17:35.704 STDOUT terraform:  } 2025-03-10 23:17:35.704323 | orchestrator | 23:17:35.704 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.704336 | orchestrator | 23:17:35.704 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:17:35.704351 | orchestrator | 23:17:35.704 STDOUT terraform:  } 2025-03-10 23:17:35.704397 | orchestrator | 23:17:35.704 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.704411 | orchestrator | 23:17:35.704 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-10 23:17:35.704424 | orchestrator | 23:17:35.704 STDOUT terraform:  } 2025-03-10 23:17:35.704439 | orchestrator | 23:17:35.704 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:17:35.704452 | orchestrator | 23:17:35.704 STDOUT terraform:  + fixed_ip { 2025-03-10 23:17:35.704465 | orchestrator | 23:17:35.704 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-03-10 23:17:35.704478 | orchestrator | 23:17:35.704 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:17:35.704490 | orchestrator | 23:17:35.704 STDOUT terraform:  } 2025-03-10 23:17:35.704506 | orchestrator | 23:17:35.704 STDOUT terraform:  } 2025-03-10 23:17:35.704544 | orchestrator | 23:17:35.704 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-03-10 23:17:35.704561 | orchestrator | 23:17:35.704 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-10 23:17:35.704608 | orchestrator | 23:17:35.704 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:17:35.704625 | orchestrator | 23:17:35.704 
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:17:35.704641 | orchestrator | 23:17:35.704 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:17:35.704743 | orchestrator | 23:17:35.704 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.704762 | orchestrator | 23:17:35.704 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:17:35.704770 | orchestrator | 23:17:35.704 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:17:35.704790 | orchestrator | 23:17:35.704 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:17:35.704823 | orchestrator | 23:17:35.704 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:17:35.704859 | orchestrator | 23:17:35.704 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.704893 | orchestrator | 23:17:35.704 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:17:35.704928 | orchestrator | 23:17:35.704 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:17:35.704962 | orchestrator | 23:17:35.704 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:17:35.704996 | orchestrator | 23:17:35.704 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:17:35.705034 | orchestrator | 23:17:35.704 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.705066 | orchestrator | 23:17:35.705 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:17:35.705102 | orchestrator | 23:17:35.705 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.705120 | orchestrator | 23:17:35.705 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.705146 | orchestrator | 23:17:35.705 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:17:35.705153 | orchestrator | 23:17:35.705 STDOUT terraform:  } 2025-03-10 23:17:35.705171 | orchestrator | 23:17:35.705 STDOUT terraform:  
+ allowed_address_pairs { 2025-03-10 23:17:35.705200 | orchestrator | 23:17:35.705 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-10 23:17:35.705207 | orchestrator | 23:17:35.705 STDOUT terraform:  } 2025-03-10 23:17:35.705229 | orchestrator | 23:17:35.705 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.705258 | orchestrator | 23:17:35.705 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:17:35.705264 | orchestrator | 23:17:35.705 STDOUT terraform:  } 2025-03-10 23:17:35.705286 | orchestrator | 23:17:35.705 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.705313 | orchestrator | 23:17:35.705 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-10 23:17:35.705319 | orchestrator | 23:17:35.705 STDOUT terraform:  } 2025-03-10 23:17:35.705346 | orchestrator | 23:17:35.705 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:17:35.705353 | orchestrator | 23:17:35.705 STDOUT terraform:  + fixed_ip { 2025-03-10 23:17:35.705382 | orchestrator | 23:17:35.705 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-03-10 23:17:35.705410 | orchestrator | 23:17:35.705 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:17:35.705417 | orchestrator | 23:17:35.705 STDOUT terraform:  } 2025-03-10 23:17:35.705423 | orchestrator | 23:17:35.705 STDOUT terraform:  } 2025-03-10 23:17:35.705474 | orchestrator | 23:17:35.705 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-03-10 23:17:35.705517 | orchestrator | 23:17:35.705 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-10 23:17:35.705552 | orchestrator | 23:17:35.705 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:17:35.705595 | orchestrator | 23:17:35.705 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:17:35.705627 | orchestrator | 23:17:35.705 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-03-10 23:17:35.705667 | orchestrator | 23:17:35.705 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.705702 | orchestrator | 23:17:35.705 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:17:35.705736 | orchestrator | 23:17:35.705 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:17:35.705770 | orchestrator | 23:17:35.705 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:17:35.705807 | orchestrator | 23:17:35.705 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:17:35.705843 | orchestrator | 23:17:35.705 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.705878 | orchestrator | 23:17:35.705 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:17:35.705914 | orchestrator | 23:17:35.705 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:17:35.705949 | orchestrator | 23:17:35.705 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:17:35.706413 | orchestrator | 23:17:35.705 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:17:35.706452 | orchestrator | 23:17:35.706 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.706487 | orchestrator | 23:17:35.706 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:17:35.706518 | orchestrator | 23:17:35.706 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.706537 | orchestrator | 23:17:35.706 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.706567 | orchestrator | 23:17:35.706 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:17:35.706595 | orchestrator | 23:17:35.706 STDOUT terraform:  } 2025-03-10 23:17:35.706618 | orchestrator | 23:17:35.706 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.706648 | orchestrator | 23:17:35.706 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-10 23:17:35.706661 | 
orchestrator | 23:17:35.706 STDOUT terraform:  } 2025-03-10 23:17:35.706685 | orchestrator | 23:17:35.706 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.706706 | orchestrator | 23:17:35.706 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:17:35.706713 | orchestrator | 23:17:35.706 STDOUT terraform:  } 2025-03-10 23:17:35.706733 | orchestrator | 23:17:35.706 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.706761 | orchestrator | 23:17:35.706 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-10 23:17:35.706776 | orchestrator | 23:17:35.706 STDOUT terraform:  } 2025-03-10 23:17:35.706799 | orchestrator | 23:17:35.706 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:17:35.706813 | orchestrator | 23:17:35.706 STDOUT terraform:  + fixed_ip { 2025-03-10 23:17:35.706837 | orchestrator | 23:17:35.706 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-03-10 23:17:35.706865 | orchestrator | 23:17:35.706 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:17:35.706871 | orchestrator | 23:17:35.706 STDOUT terraform:  } 2025-03-10 23:17:35.706887 | orchestrator | 23:17:35.706 STDOUT terraform:  } 2025-03-10 23:17:35.706936 | orchestrator | 23:17:35.706 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-03-10 23:17:35.706980 | orchestrator | 23:17:35.706 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-10 23:17:35.707015 | orchestrator | 23:17:35.706 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:17:35.707049 | orchestrator | 23:17:35.707 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:17:35.707083 | orchestrator | 23:17:35.707 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:17:35.707118 | orchestrator | 23:17:35.707 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.707153 | orchestrator | 
23:17:35.707 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:17:35.707188 | orchestrator | 23:17:35.707 STDOUT terraform:  + device_owner = (known after apply) 2025-03-10 23:17:35.707222 | orchestrator | 23:17:35.707 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:17:35.707261 | orchestrator | 23:17:35.707 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:17:35.707294 | orchestrator | 23:17:35.707 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.707329 | orchestrator | 23:17:35.707 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:17:35.707364 | orchestrator | 23:17:35.707 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:17:35.707398 | orchestrator | 23:17:35.707 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:17:35.707431 | orchestrator | 23:17:35.707 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:17:35.707467 | orchestrator | 23:17:35.707 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.707500 | orchestrator | 23:17:35.707 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:17:35.707535 | orchestrator | 23:17:35.707 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.707555 | orchestrator | 23:17:35.707 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.707589 | orchestrator | 23:17:35.707 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:17:35.707599 | orchestrator | 23:17:35.707 STDOUT terraform:  } 2025-03-10 23:17:35.707611 | orchestrator | 23:17:35.707 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.707639 | orchestrator | 23:17:35.707 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-10 23:17:35.707647 | orchestrator | 23:17:35.707 STDOUT terraform:  } 2025-03-10 23:17:35.707668 | orchestrator | 23:17:35.707 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 
23:17:35.707696 | orchestrator | 23:17:35.707 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:17:35.707703 | orchestrator | 23:17:35.707 STDOUT terraform:  } 2025-03-10 23:17:35.707724 | orchestrator | 23:17:35.707 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.707751 | orchestrator | 23:17:35.707 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-10 23:17:35.707758 | orchestrator | 23:17:35.707 STDOUT terraform:  } 2025-03-10 23:17:35.707783 | orchestrator | 23:17:35.707 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:17:35.707798 | orchestrator | 23:17:35.707 STDOUT terraform:  + fixed_ip { 2025-03-10 23:17:35.707821 | orchestrator | 23:17:35.707 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-03-10 23:17:35.707849 | orchestrator | 23:17:35.707 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:17:35.707856 | orchestrator | 23:17:35.707 STDOUT terraform:  } 2025-03-10 23:17:35.707871 | orchestrator | 23:17:35.707 STDOUT terraform:  } 2025-03-10 23:17:35.707915 | orchestrator | 23:17:35.707 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-03-10 23:17:35.707958 | orchestrator | 23:17:35.707 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-10 23:17:35.707994 | orchestrator | 23:17:35.707 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:17:35.708028 | orchestrator | 23:17:35.707 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-10 23:17:35.708063 | orchestrator | 23:17:35.708 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-10 23:17:35.708096 | orchestrator | 23:17:35.708 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.708132 | orchestrator | 23:17:35.708 STDOUT terraform:  + device_id = (known after apply) 2025-03-10 23:17:35.708166 | orchestrator | 23:17:35.708 STDOUT terraform:  + device_owner = (known after 
apply) 2025-03-10 23:17:35.708200 | orchestrator | 23:17:35.708 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-10 23:17:35.708235 | orchestrator | 23:17:35.708 STDOUT terraform:  + dns_name = (known after apply) 2025-03-10 23:17:35.708271 | orchestrator | 23:17:35.708 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.708305 | orchestrator | 23:17:35.708 STDOUT terraform:  + mac_address = (known after apply) 2025-03-10 23:17:35.708341 | orchestrator | 23:17:35.708 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:17:35.708374 | orchestrator | 23:17:35.708 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-10 23:17:35.708409 | orchestrator | 23:17:35.708 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-10 23:17:35.708444 | orchestrator | 23:17:35.708 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.708478 | orchestrator | 23:17:35.708 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-10 23:17:35.708513 | orchestrator | 23:17:35.708 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.708533 | orchestrator | 23:17:35.708 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.708561 | orchestrator | 23:17:35.708 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-10 23:17:35.708568 | orchestrator | 23:17:35.708 STDOUT terraform:  } 2025-03-10 23:17:35.708596 | orchestrator | 23:17:35.708 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.708625 | orchestrator | 23:17:35.708 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-10 23:17:35.708632 | orchestrator | 23:17:35.708 STDOUT terraform:  } 2025-03-10 23:17:35.708653 | orchestrator | 23:17:35.708 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.708680 | orchestrator | 23:17:35.708 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-10 23:17:35.708688 | orchestrator | 23:17:35.708 STDOUT terraform:  } 
2025-03-10 23:17:35.708708 | orchestrator | 23:17:35.708 STDOUT terraform:  + allowed_address_pairs { 2025-03-10 23:17:35.708735 | orchestrator | 23:17:35.708 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-10 23:17:35.708743 | orchestrator | 23:17:35.708 STDOUT terraform:  } 2025-03-10 23:17:35.708768 | orchestrator | 23:17:35.708 STDOUT terraform:  + binding (known after apply) 2025-03-10 23:17:35.708775 | orchestrator | 23:17:35.708 STDOUT terraform:  + fixed_ip { 2025-03-10 23:17:35.708802 | orchestrator | 23:17:35.708 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-03-10 23:17:35.708832 | orchestrator | 23:17:35.708 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:17:35.708839 | orchestrator | 23:17:35.708 STDOUT terraform:  } 2025-03-10 23:17:35.708846 | orchestrator | 23:17:35.708 STDOUT terraform:  } 2025-03-10 23:17:35.708894 | orchestrator | 23:17:35.708 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-03-10 23:17:35.708942 | orchestrator | 23:17:35.708 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-03-10 23:17:35.708959 | orchestrator | 23:17:35.708 STDOUT terraform:  + force_destroy = false 2025-03-10 23:17:35.708989 | orchestrator | 23:17:35.708 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.709017 | orchestrator | 23:17:35.708 STDOUT terraform:  + port_id = (known after apply) 2025-03-10 23:17:35.709044 | orchestrator | 23:17:35.709 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.709072 | orchestrator | 23:17:35.709 STDOUT terraform:  + router_id = (known after apply) 2025-03-10 23:17:35.709099 | orchestrator | 23:17:35.709 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-10 23:17:35.709106 | orchestrator | 23:17:35.709 STDOUT terraform:  } 2025-03-10 23:17:35.709144 | orchestrator | 23:17:35.709 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-03-10 23:17:35.709179 | orchestrator | 23:17:35.709 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-03-10 23:17:35.709214 | orchestrator | 23:17:35.709 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-10 23:17:35.709249 | orchestrator | 23:17:35.709 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.709272 | orchestrator | 23:17:35.709 STDOUT terraform:  + availability_zone_hints = [ 2025-03-10 23:17:35.709286 | orchestrator | 23:17:35.709 STDOUT terraform:  + "nova", 2025-03-10 23:17:35.709294 | orchestrator | 23:17:35.709 STDOUT terraform:  ] 2025-03-10 23:17:35.709330 | orchestrator | 23:17:35.709 STDOUT terraform:  + distributed = (known after apply) 2025-03-10 23:17:35.709365 | orchestrator | 23:17:35.709 STDOUT terraform:  + enable_snat = (known after apply) 2025-03-10 23:17:35.709413 | orchestrator | 23:17:35.709 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-03-10 23:17:35.709449 | orchestrator | 23:17:35.709 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.709477 | orchestrator | 23:17:35.709 STDOUT terraform:  + name = "testbed" 2025-03-10 23:17:35.709513 | orchestrator | 23:17:35.709 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.709548 | orchestrator | 23:17:35.709 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.709585 | orchestrator | 23:17:35.709 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-03-10 23:17:35.709678 | orchestrator | 23:17:35.709 STDOUT terraform:  } 2025-03-10 23:17:35.709720 | orchestrator | 23:17:35.709 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-03-10 23:17:35.709743 | orchestrator | 23:17:35.709 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-03-10 23:17:35.709759 | orchestrator | 23:17:35.709 STDOUT 
terraform:  + description = "ssh" 2025-03-10 23:17:35.709773 | orchestrator | 23:17:35.709 STDOUT terraform:  + direction = "ingress" 2025-03-10 23:17:35.709790 | orchestrator | 23:17:35.709 STDOUT terraform:  + ethertype = "IPv4" 2025-03-10 23:17:35.709803 | orchestrator | 23:17:35.709 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.709819 | orchestrator | 23:17:35.709 STDOUT terraform:  + port_range_max = 22 2025-03-10 23:17:35.709832 | orchestrator | 23:17:35.709 STDOUT terraform:  + port_range_min = 22 2025-03-10 23:17:35.709848 | orchestrator | 23:17:35.709 STDOUT terraform:  + protocol = "tcp" 2025-03-10 23:17:35.709864 | orchestrator | 23:17:35.709 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.709891 | orchestrator | 23:17:35.709 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-10 23:17:35.709907 | orchestrator | 23:17:35.709 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-10 23:17:35.709945 | orchestrator | 23:17:35.709 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-10 23:17:35.709962 | orchestrator | 23:17:35.709 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.709977 | orchestrator | 23:17:35.709 STDOUT terraform:  } 2025-03-10 23:17:35.710069 | orchestrator | 23:17:35.709 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-03-10 23:17:35.710101 | orchestrator | 23:17:35.710 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-03-10 23:17:35.710117 | orchestrator | 23:17:35.710 STDOUT terraform:  + description = "wireguard" 2025-03-10 23:17:35.710132 | orchestrator | 23:17:35.710 STDOUT terraform:  + direction = "ingress" 2025-03-10 23:17:35.710148 | orchestrator | 23:17:35.710 STDOUT terraform:  + ethertype = "IPv4" 2025-03-10 23:17:35.710173 | orchestrator | 23:17:35.710 STDOUT terraform:  + id = (known after apply) 
2025-03-10 23:17:35.710188 | orchestrator | 23:17:35.710 STDOUT terraform:  + port_range_max = 51820 2025-03-10 23:17:35.710204 | orchestrator | 23:17:35.710 STDOUT terraform:  + port_range_min = 51820 2025-03-10 23:17:35.710219 | orchestrator | 23:17:35.710 STDOUT terraform:  + protocol = "udp" 2025-03-10 23:17:35.710257 | orchestrator | 23:17:35.710 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.710274 | orchestrator | 23:17:35.710 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-10 23:17:35.710305 | orchestrator | 23:17:35.710 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-10 23:17:35.710338 | orchestrator | 23:17:35.710 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-10 23:17:35.710354 | orchestrator | 23:17:35.710 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.710369 | orchestrator | 23:17:35.710 STDOUT terraform:  } 2025-03-10 23:17:35.710489 | orchestrator | 23:17:35.710 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-03-10 23:17:35.710531 | orchestrator | 23:17:35.710 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-03-10 23:17:35.710545 | orchestrator | 23:17:35.710 STDOUT terraform:  + direction 2025-03-10 23:17:35.710561 | orchestrator | 23:17:35.710 STDOUT terraform:  = "ingress" 2025-03-10 23:17:35.710574 | orchestrator | 23:17:35.710 STDOUT terraform:  + ethertype = "IPv4" 2025-03-10 23:17:35.710612 | orchestrator | 23:17:35.710 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.710625 | orchestrator | 23:17:35.710 STDOUT terraform:  + protocol = "tcp" 2025-03-10 23:17:35.710640 | orchestrator | 23:17:35.710 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.710678 | orchestrator | 23:17:35.710 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-10 23:17:35.710694 | orchestrator | 
23:17:35.710 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-03-10 23:17:35.710733 | orchestrator | 23:17:35.710 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-10 23:17:35.710750 | orchestrator | 23:17:35.710 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.710795 | orchestrator | 23:17:35.710 STDOUT terraform:  } 2025-03-10 23:17:35.710812 | orchestrator | 23:17:35.710 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-03-10 23:17:35.710894 | orchestrator | 23:17:35.710 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-03-10 23:17:35.710922 | orchestrator | 23:17:35.710 STDOUT terraform:  + direction = "ingress" 2025-03-10 23:17:35.710935 | orchestrator | 23:17:35.710 STDOUT terraform:  + ethertype = "IPv4" 2025-03-10 23:17:35.710948 | orchestrator | 23:17:35.710 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.710963 | orchestrator | 23:17:35.710 STDOUT terraform:  + protocol = "udp" 2025-03-10 23:17:35.710983 | orchestrator | 23:17:35.710 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.710999 | orchestrator | 23:17:35.710 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-10 23:17:35.711014 | orchestrator | 23:17:35.710 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-03-10 23:17:35.711049 | orchestrator | 23:17:35.711 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-10 23:17:35.711064 | orchestrator | 23:17:35.711 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.711080 | orchestrator | 23:17:35.711 STDOUT terraform:  } 2025-03-10 23:17:35.711131 | orchestrator | 23:17:35.711 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-03-10 23:17:35.711183 | orchestrator | 23:17:35.711 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-03-10 23:17:35.711210 | orchestrator | 23:17:35.711 STDOUT terraform:  + direction = "ingress" 2025-03-10 23:17:35.711247 | orchestrator | 23:17:35.711 STDOUT terraform:  + ethertype = "IPv4" 2025-03-10 23:17:35.711263 | orchestrator | 23:17:35.711 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.711294 | orchestrator | 23:17:35.711 STDOUT terraform:  + protocol = "icmp" 2025-03-10 23:17:35.711310 | orchestrator | 23:17:35.711 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.711326 | orchestrator | 23:17:35.711 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-10 23:17:35.711341 | orchestrator | 23:17:35.711 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-10 23:17:35.711374 | orchestrator | 23:17:35.711 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-10 23:17:35.711390 | orchestrator | 23:17:35.711 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.711405 | orchestrator | 23:17:35.711 STDOUT terraform:  } 2025-03-10 23:17:35.711454 | orchestrator | 23:17:35.711 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-03-10 23:17:35.711505 | orchestrator | 23:17:35.711 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-03-10 23:17:35.711522 | orchestrator | 23:17:35.711 STDOUT terraform:  + direction = "ingress" 2025-03-10 23:17:35.711537 | orchestrator | 23:17:35.711 STDOUT terraform:  + ethertype = "IPv4" 2025-03-10 23:17:35.711570 | orchestrator | 23:17:35.711 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.711601 | orchestrator | 23:17:35.711 STDOUT terraform:  + protocol = "tcp" 2025-03-10 23:17:35.711635 | orchestrator | 23:17:35.711 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.711657 | orchestrator | 23:17:35.711 STDOUT 
terraform:  + remote_group_id = (known after apply) 2025-03-10 23:17:35.711672 | orchestrator | 23:17:35.711 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-10 23:17:35.711709 | orchestrator | 23:17:35.711 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-10 23:17:35.711742 | orchestrator | 23:17:35.711 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.711795 | orchestrator | 23:17:35.711 STDOUT terraform:  } 2025-03-10 23:17:35.711811 | orchestrator | 23:17:35.711 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-03-10 23:17:35.711848 | orchestrator | 23:17:35.711 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-03-10 23:17:35.711864 | orchestrator | 23:17:35.711 STDOUT terraform:  + direction = "ingress" 2025-03-10 23:17:35.711879 | orchestrator | 23:17:35.711 STDOUT terraform:  + ethertype = "IPv4" 2025-03-10 23:17:35.711914 | orchestrator | 23:17:35.711 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.711930 | orchestrator | 23:17:35.711 STDOUT terraform:  + protocol = "udp" 2025-03-10 23:17:35.711963 | orchestrator | 23:17:35.711 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.711979 | orchestrator | 23:17:35.711 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-10 23:17:35.712010 | orchestrator | 23:17:35.711 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-10 23:17:35.712043 | orchestrator | 23:17:35.711 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-10 23:17:35.712058 | orchestrator | 23:17:35.712 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.712073 | orchestrator | 23:17:35.712 STDOUT terraform:  } 2025-03-10 23:17:35.712124 | orchestrator | 23:17:35.712 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-03-10 23:17:35.712173 
| orchestrator | 23:17:35.712 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-03-10 23:17:35.712190 | orchestrator | 23:17:35.712 STDOUT terraform:  + direction = "ingress" 2025-03-10 23:17:35.712205 | orchestrator | 23:17:35.712 STDOUT terraform:  + ethertype = "IPv4" 2025-03-10 23:17:35.712241 | orchestrator | 23:17:35.712 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.712257 | orchestrator | 23:17:35.712 STDOUT terraform:  + protocol = "icmp" 2025-03-10 23:17:35.712288 | orchestrator | 23:17:35.712 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.712321 | orchestrator | 23:17:35.712 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-10 23:17:35.712337 | orchestrator | 23:17:35.712 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-10 23:17:35.712369 | orchestrator | 23:17:35.712 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-10 23:17:35.712395 | orchestrator | 23:17:35.712 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.712411 | orchestrator | 23:17:35.712 STDOUT terraform:  } 2025-03-10 23:17:35.712456 | orchestrator | 23:17:35.712 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-03-10 23:17:35.712504 | orchestrator | 23:17:35.712 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-03-10 23:17:35.712522 | orchestrator | 23:17:35.712 STDOUT terraform:  + description = "vrrp" 2025-03-10 23:17:35.712538 | orchestrator | 23:17:35.712 STDOUT terraform:  + direction = "ingress" 2025-03-10 23:17:35.712553 | orchestrator | 23:17:35.712 STDOUT terraform:  + ethertype = "IPv4" 2025-03-10 23:17:35.712606 | orchestrator | 23:17:35.712 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.712623 | orchestrator | 23:17:35.712 STDOUT terraform:  + protocol = "112" 2025-03-10 23:17:35.712707 | 
orchestrator | 23:17:35.712 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.712738 | orchestrator | 23:17:35.712 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-10 23:17:35.712746 | orchestrator | 23:17:35.712 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-10 23:17:35.712754 | orchestrator | 23:17:35.712 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-10 23:17:35.712760 | orchestrator | 23:17:35.712 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.712767 | orchestrator | 23:17:35.712 STDOUT terraform:  } 2025-03-10 23:17:35.712816 | orchestrator | 23:17:35.712 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-03-10 23:17:35.712862 | orchestrator | 23:17:35.712 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-03-10 23:17:35.712890 | orchestrator | 23:17:35.712 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.712922 | orchestrator | 23:17:35.712 STDOUT terraform:  + description = "management security group" 2025-03-10 23:17:35.712949 | orchestrator | 23:17:35.712 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.712976 | orchestrator | 23:17:35.712 STDOUT terraform:  + name = "testbed-management" 2025-03-10 23:17:35.713004 | orchestrator | 23:17:35.712 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.713031 | orchestrator | 23:17:35.713 STDOUT terraform:  + stateful = (known after apply) 2025-03-10 23:17:35.713058 | orchestrator | 23:17:35.713 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.713065 | orchestrator | 23:17:35.713 STDOUT terraform:  } 2025-03-10 23:17:35.713111 | orchestrator | 23:17:35.713 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-03-10 23:17:35.713156 | orchestrator | 23:17:35.713 STDOUT terraform:  + resource 
"openstack_networking_secgroup_v2" "security_group_node" { 2025-03-10 23:17:35.713184 | orchestrator | 23:17:35.713 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.713213 | orchestrator | 23:17:35.713 STDOUT terraform:  + description = "node security group" 2025-03-10 23:17:35.713239 | orchestrator | 23:17:35.713 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.713262 | orchestrator | 23:17:35.713 STDOUT terraform:  + name = "testbed-node" 2025-03-10 23:17:35.713289 | orchestrator | 23:17:35.713 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.713317 | orchestrator | 23:17:35.713 STDOUT terraform:  + stateful = (known after apply) 2025-03-10 23:17:35.713344 | orchestrator | 23:17:35.713 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.713358 | orchestrator | 23:17:35.713 STDOUT terraform:  } 2025-03-10 23:17:35.713400 | orchestrator | 23:17:35.713 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-03-10 23:17:35.713443 | orchestrator | 23:17:35.713 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-03-10 23:17:35.713473 | orchestrator | 23:17:35.713 STDOUT terraform:  + all_tags = (known after apply) 2025-03-10 23:17:35.713503 | orchestrator | 23:17:35.713 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-03-10 23:17:35.713522 | orchestrator | 23:17:35.713 STDOUT terraform:  + dns_nameservers = [ 2025-03-10 23:17:35.713538 | orchestrator | 23:17:35.713 STDOUT terraform:  + "8.8.8.8", 2025-03-10 23:17:35.713554 | orchestrator | 23:17:35.713 STDOUT terraform:  + "9.9.9.9", 2025-03-10 23:17:35.713568 | orchestrator | 23:17:35.713 STDOUT terraform:  ] 2025-03-10 23:17:35.713600 | orchestrator | 23:17:35.713 STDOUT terraform:  + enable_dhcp = true 2025-03-10 23:17:35.713630 | orchestrator | 23:17:35.713 STDOUT terraform:  + gateway_ip = (known after apply) 2025-03-10 23:17:35.713660 | orchestrator | 
23:17:35.713 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.713679 | orchestrator | 23:17:35.713 STDOUT terraform:  + ip_version = 4 2025-03-10 23:17:35.713708 | orchestrator | 23:17:35.713 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-03-10 23:17:35.713737 | orchestrator | 23:17:35.713 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-03-10 23:17:35.713773 | orchestrator | 23:17:35.713 STDOUT terraform:  + name = "subnet-testbed-management" 2025-03-10 23:17:35.713802 | orchestrator | 23:17:35.713 STDOUT terraform:  + network_id = (known after apply) 2025-03-10 23:17:35.713823 | orchestrator | 23:17:35.713 STDOUT terraform:  + no_gateway = false 2025-03-10 23:17:35.713852 | orchestrator | 23:17:35.713 STDOUT terraform:  + region = (known after apply) 2025-03-10 23:17:35.713881 | orchestrator | 23:17:35.713 STDOUT terraform:  + service_types = (known after apply) 2025-03-10 23:17:35.713910 | orchestrator | 23:17:35.713 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-10 23:17:35.713929 | orchestrator | 23:17:35.713 STDOUT terraform:  + allocation_pool { 2025-03-10 23:17:35.713952 | orchestrator | 23:17:35.713 STDOUT terraform:  + end = "192.168.31.250" 2025-03-10 23:17:35.713976 | orchestrator | 23:17:35.713 STDOUT terraform:  + start = "192.168.31.200" 2025-03-10 23:17:35.713990 | orchestrator | 23:17:35.713 STDOUT terraform:  } 2025-03-10 23:17:35.713996 | orchestrator | 23:17:35.713 STDOUT terraform:  } 2025-03-10 23:17:35.714032 | orchestrator | 23:17:35.713 STDOUT terraform:  # terraform_data.image will be created 2025-03-10 23:17:35.714056 | orchestrator | 23:17:35.714 STDOUT terraform:  + resource "terraform_data" "image" { 2025-03-10 23:17:35.714079 | orchestrator | 23:17:35.714 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.714099 | orchestrator | 23:17:35.714 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-03-10 23:17:35.714122 | orchestrator | 23:17:35.714 STDOUT terraform:  
+ output = (known after apply) 2025-03-10 23:17:35.714136 | orchestrator | 23:17:35.714 STDOUT terraform:  } 2025-03-10 23:17:35.714162 | orchestrator | 23:17:35.714 STDOUT terraform:  # terraform_data.image_node will be created 2025-03-10 23:17:35.714190 | orchestrator | 23:17:35.714 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-03-10 23:17:35.714213 | orchestrator | 23:17:35.714 STDOUT terraform:  + id = (known after apply) 2025-03-10 23:17:35.714233 | orchestrator | 23:17:35.714 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-03-10 23:17:35.714266 | orchestrator | 23:17:35.714 STDOUT terraform:  + output = (known after apply) 2025-03-10 23:17:35.714279 | orchestrator | 23:17:35.714 STDOUT terraform:  } 2025-03-10 23:17:35.714307 | orchestrator | 23:17:35.714 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-03-10 23:17:35.714321 | orchestrator | 23:17:35.714 STDOUT terraform: Changes to Outputs: 2025-03-10 23:17:35.714345 | orchestrator | 23:17:35.714 STDOUT terraform:  + manager_address = (sensitive value) 2025-03-10 23:17:35.714369 | orchestrator | 23:17:35.714 STDOUT terraform:  + private_key = (sensitive value) 2025-03-10 23:17:35.883111 | orchestrator | 23:17:35.882 STDOUT terraform: terraform_data.image: Creating... 2025-03-10 23:17:35.893004 | orchestrator | 23:17:35.882 STDOUT terraform: terraform_data.image_node: Creating... 2025-03-10 23:17:35.893054 | orchestrator | 23:17:35.882 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=fec74f68-a269-c085-30c5-78f8ff941e68] 2025-03-10 23:17:35.893064 | orchestrator | 23:17:35.882 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=8320c14b-8cea-16d2-62ec-6896f9d53c96] 2025-03-10 23:17:35.893081 | orchestrator | 23:17:35.892 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-03-10 23:17:35.897564 | orchestrator | 23:17:35.897 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 
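The security-group rules planned above all share one shape (direction, ethertype, protocol, port range, remote prefix). As a sketch, the rule logged as `security_group_management_rule1` (SSH) would come from a resource block like the following; all attribute values are taken from the plan output, while the `security_group_id` reference is an assumed internal name not confirmed by the log:

```hcl
# Sketch reconstructed from the plan output above. The reference to
# openstack_networking_secgroup_v2.security_group_management.id is an
# assumption about the configuration's internal wiring.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```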
2025-03-10 23:17:35.900843 | orchestrator | 23:17:35.900 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-03-10 23:17:35.903175 | orchestrator | 23:17:35.902 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-03-10 23:17:35.904245 | orchestrator | 23:17:35.902 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-03-10 23:17:35.904267 | orchestrator | 23:17:35.902 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-03-10 23:17:35.904273 | orchestrator | 23:17:35.902 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-03-10 23:17:35.904278 | orchestrator | 23:17:35.902 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-03-10 23:17:35.904283 | orchestrator | 23:17:35.902 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-03-10 23:17:35.904292 | orchestrator | 23:17:35.904 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-03-10 23:17:36.319475 | orchestrator | 23:17:36.319 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-03-10 23:17:36.328908 | orchestrator | 23:17:36.328 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-03-10 23:17:36.333256 | orchestrator | 23:17:36.333 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-03-10 23:17:36.339313 | orchestrator | 23:17:36.339 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
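The indexed `node_volume[0]` through `node_volume[17]` resources being created here indicate a counted volume resource. A minimal sketch, where the count of 18 follows from the indices seen in the log but the `name` pattern and `size` are assumptions (the log never prints them):

```hcl
# Sketch only: 18 indexed volumes (node_volume[0]..node_volume[17]) appear
# in the log above; name pattern and size are assumed, not logged.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 18
  name  = "testbed-node-volume-${count.index}"
  size  = 20
}
```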
2025-03-10 23:17:41.878795 | orchestrator | 23:17:41.878 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=58955372-bd49-4b9d-9554-e56b64bf35a2] 2025-03-10 23:17:41.888134 | orchestrator | 23:17:41.887 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-03-10 23:17:45.901925 | orchestrator | 23:17:45.901 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-03-10 23:17:45.903454 | orchestrator | 23:17:45.902 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-03-10 23:17:45.903989 | orchestrator | 23:17:45.903 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-03-10 23:17:45.904020 | orchestrator | 23:17:45.903 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-03-10 23:17:45.904058 | orchestrator | 23:17:45.903 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-03-10 23:17:45.905453 | orchestrator | 23:17:45.903 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-03-10 23:17:45.905489 | orchestrator | 23:17:45.905 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-03-10 23:17:46.330722 | orchestrator | 23:17:46.330 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-03-10 23:17:46.340352 | orchestrator | 23:17:46.340 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... 
[10s elapsed] 2025-03-10 23:17:46.476004 | orchestrator | 23:17:46.475 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=6ce862c5-1280-46ce-a44b-7fdf993418a7] 2025-03-10 23:17:46.483077 | orchestrator | 23:17:46.482 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-03-10 23:17:46.488433 | orchestrator | 23:17:46.488 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 10s [id=198b2359-1b9f-4b26-9f87-5ad23fedee83] 2025-03-10 23:17:46.497157 | orchestrator | 23:17:46.496 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-03-10 23:17:46.524306 | orchestrator | 23:17:46.522 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=15619a47-82d1-4686-ab95-9df9f8c2e13f] 2025-03-10 23:17:46.528749 | orchestrator | 23:17:46.528 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-03-10 23:17:46.534706 | orchestrator | 23:17:46.534 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 11s [id=4a074a62-8b02-498d-8d1e-97a298b60d07] 2025-03-10 23:17:46.541017 | orchestrator | 23:17:46.540 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-03-10 23:17:46.581957 | orchestrator | 23:17:46.581 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=663fc4ce-d59c-4a76-8f0a-41179b606a99] 2025-03-10 23:17:46.582837 | orchestrator | 23:17:46.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=69965309-039b-44ae-a6a8-6a48204034b9] 2025-03-10 23:17:46.586347 | orchestrator | 23:17:46.586 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-03-10 23:17:46.591511 | orchestrator | 23:17:46.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 
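The router planned earlier (name `testbed`, external network `e6be7364-bfd8-4de7-8120-8f41c69a139a`, availability zone hint `nova`) maps onto a resource block like this sketch; the values are grounded in the plan output, though the actual configuration may supply them through variables rather than literals:

```hcl
# Sketch: values copied from the "openstack_networking_router_v2.router"
# plan entry above; the literal (non-variable) form is an assumption.
resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}
```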
2025-03-10 23:17:46.608753 | orchestrator | 23:17:46.608 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 11s [id=e87dbd84-35d6-4b7e-85dc-79bdef85b968] 2025-03-10 23:17:46.609670 | orchestrator | 23:17:46.609 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=16b61174-5b22-4bfc-bb4c-78d6ccb86350] 2025-03-10 23:17:46.618727 | orchestrator | 23:17:46.618 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=a639b430-ffa9-4507-998f-3db9f9caeda1] 2025-03-10 23:17:46.620285 | orchestrator | 23:17:46.620 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-03-10 23:17:46.621772 | orchestrator | 23:17:46.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-03-10 23:17:46.626441 | orchestrator | 23:17:46.626 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-03-10 23:17:46.955278 | orchestrator | 23:17:46.954 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-03-10 23:17:46.964755 | orchestrator | 23:17:46.964 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-03-10 23:17:51.891404 | orchestrator | 23:17:51.891 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-03-10 23:17:52.050323 | orchestrator | 23:17:52.049 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=96f3a3bc-1bc2-4311-aa73-ad4d834104c1] 2025-03-10 23:17:52.057671 | orchestrator | 23:17:52.057 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-03-10 23:17:56.483740 | orchestrator | 23:17:56.483 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... 
[10s elapsed] 2025-03-10 23:17:56.498782 | orchestrator | 23:17:56.498 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-03-10 23:17:56.530072 | orchestrator | 23:17:56.529 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-03-10 23:17:56.542234 | orchestrator | 23:17:56.541 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-03-10 23:17:56.587527 | orchestrator | 23:17:56.587 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-03-10 23:17:56.592748 | orchestrator | 23:17:56.592 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-03-10 23:17:56.621021 | orchestrator | 23:17:56.620 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-03-10 23:17:56.623119 | orchestrator | 23:17:56.622 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-03-10 23:17:56.655148 | orchestrator | 23:17:56.654 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=767faefc-3683-4105-addb-6ba0af8112be] 2025-03-10 23:17:56.668850 | orchestrator | 23:17:56.668 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=e67b8ef9-ccfd-41d6-a59d-192659415a34] 2025-03-10 23:17:56.669665 | orchestrator | 23:17:56.669 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-03-10 23:17:56.676507 | orchestrator | 23:17:56.676 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
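Earlier in the plan, `terraform_data.image` carries the input `"Ubuntu 24.04"`, and both image data sources then resolved to the same image id (`cd9ae1ce-c4eb-4380-9087-2aa040df6990`). That pattern suggests the `terraform_data` value feeds an image lookup, roughly as in this sketch; the `name`/`most_recent` arguments are assumptions about how the lookup is wired:

```hcl
# Sketch: terraform_data.image with input "Ubuntu 24.04" feeding an image
# data source, as suggested by the plan and read output above. The exact
# data-source arguments are assumptions.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}
```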
2025-03-10 23:17:56.716945 | orchestrator | 23:17:56.716 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=8f8c648b-56e1-45e0-bc37-3aa283872edf]
2025-03-10 23:17:56.730177 | orchestrator | 23:17:56.729 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-03-10 23:17:56.731256 | orchestrator | 23:17:56.731 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=3a54aab5-53b8-4264-89d4-baa19ed5d083]
2025-03-10 23:17:56.737250 | orchestrator | 23:17:56.737 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-03-10 23:17:56.773346 | orchestrator | 23:17:56.772 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=e1e42b39-68bf-425f-b55c-5a5d9a05f03d]
2025-03-10 23:17:56.783847 | orchestrator | 23:17:56.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-03-10 23:17:56.808863 | orchestrator | 23:17:56.808 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=7d415c3e-952f-4033-b652-0179fc8d375d]
2025-03-10 23:17:56.815189 | orchestrator | 23:17:56.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-03-10 23:17:56.822001 | orchestrator | 23:17:56.821 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=35890ba3-27d1-4ca1-853f-43468bc69b0e]
2025-03-10 23:17:56.822582 | orchestrator | 23:17:56.822 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=7ad77790-9240-4bf7-8fbd-881e22f1e07b]
2025-03-10 23:17:56.835825 | orchestrator | 23:17:56.835 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-03-10 23:17:56.842158 | orchestrator | 23:17:56.841 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=6e0f6a98d47cb540b3d5ee27694f899dce44bcee]
2025-03-10 23:17:56.843225 | orchestrator | 23:17:56.843 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-03-10 23:17:56.848416 | orchestrator | 23:17:56.848 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=a6c9a8eb380b664911702747376e3bc16ada6c19]
2025-03-10 23:17:56.965530 | orchestrator | 23:17:56.965 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-03-10 23:17:57.275443 | orchestrator | 23:17:57.275 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=3ec363f6-5a87-4faa-b66a-09e81891e9fd]
2025-03-10 23:18:02.059346 | orchestrator | 23:18:02.059 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-03-10 23:18:02.344975 | orchestrator | 23:18:02.344 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 5s [id=c5cabe24-ee98-459d-9f8d-6dc8ed6a2256]
2025-03-10 23:18:02.360939 | orchestrator | 23:18:02.360 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-03-10 23:18:02.376763 | orchestrator | 23:18:02.376 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=2c66825c-4af4-4039-9be2-0884ea12c780]
2025-03-10 23:18:06.677618 | orchestrator | 23:18:06.677 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-03-10 23:18:06.730806 | orchestrator | 23:18:06.730 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-03-10 23:18:06.737805 | orchestrator | 23:18:06.737 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-03-10 23:18:06.785206 | orchestrator | 23:18:06.785 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-03-10 23:18:06.815264 | orchestrator | 23:18:06.815 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-03-10 23:18:07.038607 | orchestrator | 23:18:07.038 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=d67a3c24-b729-45e0-8397-e020ae3d0e20]
2025-03-10 23:18:07.081929 | orchestrator | 23:18:07.081 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=9511e8b6-3e60-417c-9ca1-6d08f1d83634]
2025-03-10 23:18:07.107573 | orchestrator | 23:18:07.107 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=89492e79-5deb-49f7-a1a7-185d4ce5c08c]
2025-03-10 23:18:07.131004 | orchestrator | 23:18:07.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=e02395b5-31e4-4774-88e5-7a8793530918]
2025-03-10 23:18:07.154286 | orchestrator | 23:18:07.153 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=be55fef4-fc96-4f23-a232-d530c0f50905]
2025-03-10 23:18:08.878579 | orchestrator | 23:18:08.878 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=8b35d142-0a1c-41b0-ac84-2104148655b0]
2025-03-10 23:18:08.884006 | orchestrator | 23:18:08.883 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-03-10 23:18:08.884608 | orchestrator | 23:18:08.884 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-03-10 23:18:08.885795 | orchestrator | 23:18:08.885 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-03-10 23:18:09.026176 | orchestrator | 23:18:09.025 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=b883536b-6f4d-4ca5-8fb3-8d79655089b7]
2025-03-10 23:18:09.032865 | orchestrator | 23:18:09.032 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-03-10 23:18:09.039445 | orchestrator | 23:18:09.039 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=fb6b85e6-cfa0-4d9b-a269-01fe7cd72e31]
2025-03-10 23:18:09.041057 | orchestrator | 23:18:09.040 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-03-10 23:18:09.042841 | orchestrator | 23:18:09.042 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-03-10 23:18:09.043655 | orchestrator | 23:18:09.043 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-03-10 23:18:09.053250 | orchestrator | 23:18:09.053 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-03-10 23:18:09.053834 | orchestrator | 23:18:09.053 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-03-10 23:18:09.055429 | orchestrator | 23:18:09.055 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-03-10 23:18:09.059722 | orchestrator | 23:18:09.059 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-03-10 23:18:09.060591 | orchestrator | 23:18:09.060 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-03-10 23:18:09.421515 | orchestrator | 23:18:09.421 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=156dcf86-a73c-4e1d-a458-8d31df1b3697]
2025-03-10 23:18:09.434440 | orchestrator | 23:18:09.434 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-03-10 23:18:09.572657 | orchestrator | 23:18:09.572 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=be7a42dc-fda6-4c19-99e3-dc506639b851]
2025-03-10 23:18:09.581662 | orchestrator | 23:18:09.581 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-03-10 23:18:09.688463 | orchestrator | 23:18:09.688 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=2cffe82c-d9f1-45a6-a60c-a9b6ed663366]
2025-03-10 23:18:09.699192 | orchestrator | 23:18:09.698 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-03-10 23:18:09.756597 | orchestrator | 23:18:09.756 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=d8535fdf-56f2-4b8a-9bf6-80b2d4c3528d]
2025-03-10 23:18:09.762645 | orchestrator | 23:18:09.762 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-03-10 23:18:09.804920 | orchestrator | 23:18:09.804 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=a0330138-4234-4610-82fa-950a1ca5a475]
2025-03-10 23:18:09.813693 | orchestrator | 23:18:09.813 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-03-10 23:18:10.009209 | orchestrator | 23:18:10.008 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=bcdf8779-2e13-4443-9b39-21237451dc24]
2025-03-10 23:18:10.017642 | orchestrator | 23:18:10.017 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-03-10 23:18:10.119259 | orchestrator | 23:18:10.118 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=9cf032bc-80b6-4ba8-8e82-5c4c1721eecf]
2025-03-10 23:18:10.130433 | orchestrator | 23:18:10.130 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-03-10 23:18:10.240193 | orchestrator | 23:18:10.239 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=08206f08-6541-4fcf-a8a9-868a50eda1a5]
2025-03-10 23:18:10.355998 | orchestrator | 23:18:10.355 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=23670351-47b8-44de-9fd8-167c56473845]
2025-03-10 23:18:14.651826 | orchestrator | 23:18:14.651 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=e60803b5-e604-4456-812e-dc25da7bb5d5]
2025-03-10 23:18:14.716184 | orchestrator | 23:18:14.715 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=cbc115d4-afc1-45a7-bf38-5cdbcad5deaf]
2025-03-10 23:18:14.759806 | orchestrator | 23:18:14.759 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=19cedf18-35a1-4855-8ad3-d16134ae9f2c]
2025-03-10 23:18:14.895420 | orchestrator | 23:18:14.895 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=3bd7ac0b-e693-4b77-ac15-aed82ba839e1]
2025-03-10 23:18:14.985316 | orchestrator | 23:18:14.985 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=4acecba4-7df9-453c-a360-27934f460b78]
2025-03-10 23:18:15.185661 | orchestrator | 23:18:15.185 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=55f10b90-d868-4c31-bd97-85244ef93c06]
2025-03-10 23:18:15.783808 | orchestrator | 23:18:15.783 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=86968f54-3f57-47e2-afbf-8a2b1f8955da]
2025-03-10 23:18:16.662152 | orchestrator | 23:18:16.661 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=c1344604-904a-44ec-b173-b274e1567988]
2025-03-10 23:18:16.679419 | orchestrator | 23:18:16.679 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-03-10 23:18:16.701561 | orchestrator | 23:18:16.696 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-03-10 23:18:16.702273 | orchestrator | 23:18:16.702 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-03-10 23:18:16.710232 | orchestrator | 23:18:16.710 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-03-10 23:18:16.713637 | orchestrator | 23:18:16.713 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-03-10 23:18:16.716761 | orchestrator | 23:18:16.716 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-03-10 23:18:16.722040 | orchestrator | 23:18:16.721 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-03-10 23:18:22.957130 | orchestrator | 23:18:22.956 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=019c80d5-e747-4cf9-b9e5-f14f5631521a]
2025-03-10 23:18:22.968252 | orchestrator | 23:18:22.968 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-03-10 23:18:22.973627 | orchestrator | 23:18:22.973 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-03-10 23:18:22.975784 | orchestrator | 23:18:22.975 STDOUT terraform: local_file.inventory: Creating...
2025-03-10 23:18:22.980592 | orchestrator | 23:18:22.980 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=e7b5bfa78c40ae24d94246393d750cce4b2674b5]
2025-03-10 23:18:22.981223 | orchestrator | 23:18:22.981 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=7e413b440a00470a9238bec1315e4451f0e19ce0]
2025-03-10 23:18:23.420256 | orchestrator | 23:18:23.419 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=019c80d5-e747-4cf9-b9e5-f14f5631521a]
2025-03-10 23:18:26.699641 | orchestrator | 23:18:26.699 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-03-10 23:18:26.703772 | orchestrator | 23:18:26.703 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-03-10 23:18:26.710869 | orchestrator | 23:18:26.710 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-03-10 23:18:26.715171 | orchestrator | 23:18:26.714 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-03-10 23:18:26.717634 | orchestrator | 23:18:26.717 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-03-10 23:18:26.724888 | orchestrator | 23:18:26.724 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-03-10 23:18:36.700977 | orchestrator | 23:18:36.700 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-03-10 23:18:36.703922 | orchestrator | 23:18:36.703 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-03-10 23:18:36.712101 | orchestrator | 23:18:36.711 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-03-10 23:18:36.715244 | orchestrator | 23:18:36.715 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-03-10 23:18:36.717913 | orchestrator | 23:18:36.717 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-03-10 23:18:36.725161 | orchestrator | 23:18:36.724 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-03-10 23:18:37.026379 | orchestrator | 23:18:37.025 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=7d5c2880-0b76-4637-841d-4585b622baf4]
2025-03-10 23:18:37.094415 | orchestrator | 23:18:37.094 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=ce93970f-f776-4608-aa07-2eee542088e2]
2025-03-10 23:18:37.117330 | orchestrator | 23:18:37.117 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=64cd1a4e-02c2-41e6-8c17-8a0b2301efdd]
2025-03-10 23:18:46.713496 | orchestrator | 23:18:46.713 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-03-10 23:18:46.715407 | orchestrator | 23:18:46.715 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-03-10 23:18:46.725905 | orchestrator | 23:18:46.725 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-03-10 23:18:47.292469 | orchestrator | 23:18:47.292 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=d120967b-90ae-430f-a122-7f5c7293d9cc]
2025-03-10 23:18:47.354218 | orchestrator | 23:18:47.353 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=3193d475-2cf2-4d0c-a01e-f28249417ff7]
2025-03-10 23:18:47.367852 | orchestrator | 23:18:47.367 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=636815f3-f0e1-4814-a5f5-2cef8e599dd8]
2025-03-10 23:18:47.390227 | orchestrator | 23:18:47.390 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-03-10 23:18:47.402441 | orchestrator | 23:18:47.402 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=161366679923360912]
2025-03-10 23:18:47.402871 | orchestrator | 23:18:47.402 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating...
2025-03-10 23:18:47.403468 | orchestrator | 23:18:47.403 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-03-10 23:18:47.405344 | orchestrator | 23:18:47.405 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating...
2025-03-10 23:18:47.406546 | orchestrator | 23:18:47.406 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-03-10 23:18:47.408834 | orchestrator | 23:18:47.406 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating...
2025-03-10 23:18:47.408868 | orchestrator | 23:18:47.408 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating...
2025-03-10 23:18:47.415813 | orchestrator | 23:18:47.415 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating...
2025-03-10 23:18:47.427444 | orchestrator | 23:18:47.427 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-03-10 23:18:47.431135 | orchestrator | 23:18:47.430 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating...
2025-03-10 23:18:47.431355 | orchestrator | 23:18:47.431 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-03-10 23:18:52.739202 | orchestrator | 23:18:52.738 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=7d5c2880-0b76-4637-841d-4585b622baf4/15619a47-82d1-4686-ab95-9df9f8c2e13f]
2025-03-10 23:18:52.754274 | orchestrator | 23:18:52.754 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating...
2025-03-10 23:18:52.764801 | orchestrator | 23:18:52.764 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 6s [id=64cd1a4e-02c2-41e6-8c17-8a0b2301efdd/35890ba3-27d1-4ca1-853f-43468bc69b0e]
2025-03-10 23:18:52.774039 | orchestrator | 23:18:52.773 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 6s [id=d120967b-90ae-430f-a122-7f5c7293d9cc/16b61174-5b22-4bfc-bb4c-78d6ccb86350]
2025-03-10 23:18:52.775258 | orchestrator | 23:18:52.775 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-03-10 23:18:52.787650 | orchestrator | 23:18:52.787 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 6s [id=3193d475-2cf2-4d0c-a01e-f28249417ff7/198b2359-1b9f-4b26-9f87-5ad23fedee83]
2025-03-10 23:18:52.788180 | orchestrator | 23:18:52.788 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-03-10 23:18:52.791466 | orchestrator | 23:18:52.791 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=636815f3-f0e1-4814-a5f5-2cef8e599dd8/3a54aab5-53b8-4264-89d4-baa19ed5d083]
2025-03-10 23:18:52.792290 | orchestrator | 23:18:52.792 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=ce93970f-f776-4608-aa07-2eee542088e2/6ce862c5-1280-46ce-a44b-7fdf993418a7]
2025-03-10 23:18:52.803421 | orchestrator | 23:18:52.803 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 6s [id=7d5c2880-0b76-4637-841d-4585b622baf4/e1e42b39-68bf-425f-b55c-5a5d9a05f03d]
2025-03-10 23:18:52.809408 | orchestrator | 23:18:52.809 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=64cd1a4e-02c2-41e6-8c17-8a0b2301efdd/96f3a3bc-1bc2-4311-aa73-ad4d834104c1]
2025-03-10 23:18:52.809763 | orchestrator | 23:18:52.809 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating...
2025-03-10 23:18:52.810179 | orchestrator | 23:18:52.810 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=3193d475-2cf2-4d0c-a01e-f28249417ff7/a639b430-ffa9-4507-998f-3db9f9caeda1]
2025-03-10 23:18:52.810567 | orchestrator | 23:18:52.810 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-03-10 23:18:52.813870 | orchestrator | 23:18:52.813 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating...
2025-03-10 23:18:52.814261 | orchestrator | 23:18:52.814 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-03-10 23:18:52.823398 | orchestrator | 23:18:52.823 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-03-10 23:18:52.829189 | orchestrator | 23:18:52.829 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 6s [id=636815f3-f0e1-4814-a5f5-2cef8e599dd8/8f8c648b-56e1-45e0-bc37-3aa283872edf]
2025-03-10 23:18:52.833010 | orchestrator | 23:18:52.832 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-03-10 23:18:58.050171 | orchestrator | 23:18:58.049 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=ce93970f-f776-4608-aa07-2eee542088e2/7ad77790-9240-4bf7-8fbd-881e22f1e07b]
2025-03-10 23:18:58.366925 | orchestrator | 23:18:58.366 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=7d5c2880-0b76-4637-841d-4585b622baf4/69965309-039b-44ae-a6a8-6a48204034b9]
2025-03-10 23:18:58.367786 | orchestrator | 23:18:58.367 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=d120967b-90ae-430f-a122-7f5c7293d9cc/e67b8ef9-ccfd-41d6-a59d-192659415a34]
2025-03-10 23:18:58.368665 | orchestrator | 23:18:58.368 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=ce93970f-f776-4608-aa07-2eee542088e2/4a074a62-8b02-498d-8d1e-97a298b60d07]
2025-03-10 23:18:58.369351 | orchestrator | 23:18:58.369 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=d120967b-90ae-430f-a122-7f5c7293d9cc/767faefc-3683-4105-addb-6ba0af8112be]
2025-03-10 23:18:58.370154 | orchestrator | 23:18:58.369 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=636815f3-f0e1-4814-a5f5-2cef8e599dd8/e87dbd84-35d6-4b7e-85dc-79bdef85b968]
2025-03-10 23:18:58.370768 | orchestrator | 23:18:58.370 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=3193d475-2cf2-4d0c-a01e-f28249417ff7/7d415c3e-952f-4033-b652-0179fc8d375d]
2025-03-10 23:18:58.371019 | orchestrator | 23:18:58.370 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=64cd1a4e-02c2-41e6-8c17-8a0b2301efdd/663fc4ce-d59c-4a76-8f0a-41179b606a99]
2025-03-10 23:19:02.835352 | orchestrator | 23:19:02.835 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-03-10 23:19:12.838378 | orchestrator | 23:19:12.838 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-03-10 23:19:13.389192 | orchestrator | 23:19:13.388 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=b9dee520-1dd3-4f26-a307-74376af809ba]
2025-03-10 23:19:13.405506 | orchestrator | 23:19:13.405 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed.
2025-03-10 23:19:13.405611 | orchestrator | 23:19:13.405 STDOUT terraform: Outputs:
2025-03-10 23:19:13.405658 | orchestrator | 23:19:13.405 STDOUT terraform: manager_address = 
2025-03-10 23:19:13.405733 | orchestrator | 23:19:13.405 STDOUT terraform: private_key = 
2025-03-10 23:19:13.666828 | orchestrator | changed
2025-03-10 23:19:13.703945 |
2025-03-10 23:19:13.704054 | TASK [Create infrastructure (stable)]
2025-03-10 23:19:13.812141 | orchestrator | skipping: Conditional result was False
2025-03-10 23:19:13.830581 |
2025-03-10 23:19:13.830733 | TASK [Fetch manager address]
2025-03-10 23:19:24.255383 | orchestrator | ok
2025-03-10 23:19:24.273605 |
2025-03-10 23:19:24.273752 | TASK [Set manager_host address]
2025-03-10 23:19:24.380756 | orchestrator | ok
2025-03-10 23:19:24.393078 |
2025-03-10 23:19:24.393207 | LOOP [Update ansible collections]
2025-03-10 23:19:25.143670 | orchestrator | changed
2025-03-10 23:19:25.875353 | orchestrator | changed
2025-03-10 23:19:25.895832 |
2025-03-10 23:19:25.895945 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-03-10 23:19:36.397543 | orchestrator | ok
2025-03-10 23:19:36.409690 |
2025-03-10 23:19:36.409807 | TASK [Wait a little longer for the manager so that everything is ready]
2025-03-10 23:20:36.467713 | orchestrator | ok
2025-03-10 23:20:36.479549 |
2025-03-10 23:20:36.479734 | TASK [Fetch manager ssh hostkey]
2025-03-10 23:20:37.564149 | orchestrator | Output suppressed because no_log was given
2025-03-10 23:20:37.573126 |
2025-03-10 23:20:37.573244 | TASK [Get ssh keypair from terraform environment]
2025-03-10 23:20:38.115095 | orchestrator | changed
2025-03-10 23:20:38.132764 |
2025-03-10 23:20:38.132928 | TASK [Point out that the following task takes some time and does not give any output]
2025-03-10 23:20:38.183678 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-03-10 23:20:38.194892 |
2025-03-10 23:20:38.195008 | TASK [Run manager part 0]
2025-03-10 23:20:39.020931 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-03-10 23:20:39.062122 | orchestrator |
2025-03-10 23:20:41.039161 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-03-10 23:20:41.039221 | orchestrator |
2025-03-10 23:20:41.039239 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-03-10 23:20:41.039255 | orchestrator | ok: [testbed-manager]
2025-03-10 23:20:42.920424 | orchestrator |
2025-03-10 23:20:42.920513 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-03-10 23:20:42.920528 | orchestrator |
2025-03-10 23:20:42.920534 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:20:42.920550 | orchestrator | ok: [testbed-manager]
2025-03-10 23:20:43.582311 | orchestrator |
2025-03-10 23:20:43.582406 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-03-10 23:20:43.582425 | orchestrator | ok: [testbed-manager]
2025-03-10 23:20:43.629613 | orchestrator |
2025-03-10 23:20:43.629638 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-03-10 23:20:43.629650 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:20:43.660885 | orchestrator |
2025-03-10 23:20:43.660907 | orchestrator | TASK [Update package cache] ****************************************************
2025-03-10 23:20:43.660917 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:20:43.685554 | orchestrator |
2025-03-10 23:20:43.685571 | orchestrator | TASK [Install required packages] ***********************************************
2025-03-10 23:20:43.685580 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:20:43.709120 | orchestrator |
2025-03-10 23:20:43.709138 | orchestrator | TASK [Remove some python packages] *********************************************
2025-03-10 23:20:43.709147 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:20:43.733050 | orchestrator |
2025-03-10 23:20:43.733068 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-03-10 23:20:43.733078 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:20:43.770304 | orchestrator |
2025-03-10 23:20:43.770327 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-03-10 23:20:43.770337 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:20:43.798413 | orchestrator |
2025-03-10 23:20:43.798429 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-03-10 23:20:43.798439 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:20:44.636361 | orchestrator |
2025-03-10 23:20:44.636432 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-03-10 23:20:44.636448 | orchestrator | changed: [testbed-manager]
2025-03-10 23:23:17.599038 | orchestrator |
2025-03-10 23:23:17.599138 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-03-10 23:23:17.599203 | orchestrator | changed: [testbed-manager]
2025-03-10 23:24:32.395549 | orchestrator |
2025-03-10 23:24:32.395600 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-03-10 23:24:32.395620 | orchestrator | changed: [testbed-manager]
2025-03-10 23:24:56.645910 | orchestrator |
2025-03-10 23:24:56.645963 | orchestrator | TASK [Install required packages] ***********************************************
2025-03-10 23:24:56.645980 | orchestrator | changed: [testbed-manager]
2025-03-10 23:25:06.953949 | orchestrator |
2025-03-10 23:25:06.954006 | orchestrator | TASK [Remove some python packages] *********************************************
2025-03-10 23:25:06.954077 | orchestrator | changed: [testbed-manager]
2025-03-10 23:25:07.002306 | orchestrator |
2025-03-10 23:25:07.002359 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-03-10 23:25:07.002385 | orchestrator | ok: [testbed-manager]
2025-03-10 23:25:07.811785 | orchestrator |
2025-03-10 23:25:07.811868 | orchestrator | TASK [Get current user] ********************************************************
2025-03-10 23:25:07.811912 | orchestrator | ok: [testbed-manager]
2025-03-10 23:25:08.560188 | orchestrator |
2025-03-10 23:25:08.560258 | orchestrator | TASK [Create venv directory] ***************************************************
2025-03-10 23:25:08.560290 | orchestrator | changed: [testbed-manager]
2025-03-10 23:25:15.991572 | orchestrator |
2025-03-10 23:25:15.991707 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-03-10 23:25:15.991733 | orchestrator | changed: [testbed-manager]
2025-03-10 23:25:23.140742 | orchestrator |
2025-03-10 23:25:23.140867 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-03-10 23:25:23.140919 | orchestrator | changed: [testbed-manager]
2025-03-10 23:25:26.096603 | orchestrator |
2025-03-10 23:25:26.096685 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-03-10 23:25:26.096721 | orchestrator | changed: [testbed-manager]
2025-03-10 23:25:28.130229 | orchestrator |
2025-03-10 23:25:28.130336 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-03-10 23:25:28.130370 | orchestrator | changed: [testbed-manager]
2025-03-10 23:25:29.257260 | orchestrator |
2025-03-10 23:25:29.257361 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-03-10 23:25:29.257394 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-03-10 23:25:29.297716 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-03-10 23:25:29.297758 | orchestrator |
2025-03-10 23:25:29.297774 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-03-10 23:25:29.297797 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-03-10 23:25:32.417960 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-03-10 23:25:32.418081 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-03-10 23:25:32.418103 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-03-10 23:25:32.418189 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-03-10 23:25:33.018910 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-03-10 23:25:33.019008 | orchestrator |
2025-03-10 23:25:33.019025 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-03-10 23:25:33.019053 | orchestrator | changed: [testbed-manager]
2025-03-10 23:25:53.190991 | orchestrator |
2025-03-10 23:25:53.191051 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-03-10 23:25:53.191071 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-03-10 23:25:55.802475 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-03-10 23:25:55.802575 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-03-10 23:25:55.802594 | orchestrator |
2025-03-10 23:25:55.802613 | orchestrator | TASK [Install local collections] ***********************************************
2025-03-10 23:25:55.802644 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-03-10 23:25:57.194861 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-03-10 23:25:57.194901 | orchestrator |
2025-03-10 23:25:57.194908 | orchestrator | PLAY [Create operator user] ****************************************************
2025-03-10 23:25:57.194915 | orchestrator |
2025-03-10 23:25:57.194921 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:25:57.194933 | orchestrator | ok: [testbed-manager]
2025-03-10 23:25:57.242675 | orchestrator |
2025-03-10 23:25:57.242723 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-03-10 23:25:57.242740 | orchestrator | ok: [testbed-manager]
2025-03-10 23:25:57.333703 | orchestrator |
2025-03-10 23:25:57.333752 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-03-10 23:25:57.333769 | orchestrator | ok: [testbed-manager]
2025-03-10 23:25:58.072730 | orchestrator |
2025-03-10 23:25:58.073432 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-03-10 23:25:58.073466 | orchestrator | changed: [testbed-manager]
2025-03-10 23:25:58.828845 | orchestrator |
2025-03-10 23:25:58.828923 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-03-10 23:25:58.828953 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:00.279683 | orchestrator |
2025-03-10 23:26:00.279758 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-03-10 23:26:00.279787 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-03-10 23:26:01.675816 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-03-10 23:26:01.675914 | orchestrator |
2025-03-10 23:26:01.675934 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-03-10 23:26:01.675965 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:03.409839 | orchestrator |
2025-03-10 23:26:03.409891 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-03-10 23:26:03.409910 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-03-10 23:26:03.993191 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-03-10 23:26:03.993243 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-03-10 23:26:03.993254 | orchestrator |
2025-03-10 23:26:03.993264 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-03-10 23:26:03.993282 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:04.063070 | orchestrator |
2025-03-10 23:26:04.063192 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-03-10 23:26:04.063212 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:26:04.919067 | orchestrator |
2025-03-10 23:26:04.919177 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-03-10 23:26:04.919199 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:26:04.950857 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:04.950900 | orchestrator |
2025-03-10 23:26:04.950911 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-03-10 23:26:04.950926 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:26:04.980992 | orchestrator |
2025-03-10 23:26:04.981038 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-03-10 23:26:04.981055 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:26:05.016506 | orchestrator |
2025-03-10 23:26:05.016554 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-03-10 23:26:05.016571 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:26:05.064869 | orchestrator |
2025-03-10 23:26:05.064914 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-03-10 23:26:05.064932 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:26:05.784222 | orchestrator |
2025-03-10 23:26:05.784319 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-03-10 23:26:05.784353 | orchestrator | ok: [testbed-manager]
2025-03-10 23:26:07.206245 | orchestrator |
2025-03-10 23:26:07.206342 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-03-10 23:26:07.206362 | orchestrator |
2025-03-10 23:26:07.206377 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:26:07.206406 | orchestrator | ok: [testbed-manager]
2025-03-10 23:26:08.227878 | orchestrator |
2025-03-10 23:26:08.227922 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-03-10 23:26:08.227935 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:08.323890 | orchestrator |
2025-03-10 23:26:08.324296 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:26:08.324765 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-03-10 23:26:08.324772 | orchestrator |
2025-03-10 23:26:08.445223 | orchestrator | changed
2025-03-10 23:26:08.466362 |
2025-03-10 23:26:08.466514 | TASK [Point out that the log in on the manager is now possible]
2025-03-10 23:26:08.517293 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
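The PLAY RECAP line above (`ok=33 changed=23 unreachable=0 failed=0 ...`) is the summary CI wrappers typically gate on. Purely as an illustration (this parsing is an assumption, not something the job itself does), a recap line can be checked like this:

```shell
# Illustrative gate on an Ansible PLAY RECAP line. The recap string
# is copied from the log above; the parsing is a hypothetical sketch.
recap='testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0'
failed="$(printf '%s\n' "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)"
unreachable="$(printf '%s\n' "$recap" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)"
if [ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ]; then
    echo "recap clean"
fi
```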
2025-03-10 23:26:08.527465 |
2025-03-10 23:26:08.527569 | TASK [Point out that the following task takes some time and does not give any output]
2025-03-10 23:26:08.578513 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-03-10 23:26:08.589678 |
2025-03-10 23:26:08.589785 | TASK [Run manager part 1 + 2]
2025-03-10 23:26:09.389882 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-03-10 23:26:09.441476 | orchestrator |
2025-03-10 23:26:11.889474 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-03-10 23:26:11.889536 | orchestrator |
2025-03-10 23:26:11.889559 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:26:11.889576 | orchestrator | ok: [testbed-manager]
2025-03-10 23:26:11.925305 | orchestrator |
2025-03-10 23:26:11.925332 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-03-10 23:26:11.925345 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:26:11.962091 | orchestrator |
2025-03-10 23:26:11.962114 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-03-10 23:26:11.962123 | orchestrator | ok: [testbed-manager]
2025-03-10 23:26:11.996249 | orchestrator |
2025-03-10 23:26:11.996276 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-03-10 23:26:11.996287 | orchestrator | ok: [testbed-manager]
2025-03-10 23:26:12.067279 | orchestrator |
2025-03-10 23:26:12.067329 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-03-10 23:26:12.067343 | orchestrator | ok: [testbed-manager]
2025-03-10 23:26:12.124502 | orchestrator |
2025-03-10 23:26:12.124528 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-03-10 23:26:12.124539 | orchestrator | ok: [testbed-manager]
2025-03-10 23:26:12.162302 | orchestrator |
2025-03-10 23:26:12.162331 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-03-10 23:26:12.162344 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-03-10 23:26:12.893376 | orchestrator |
2025-03-10 23:26:12.893430 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-03-10 23:26:12.893447 | orchestrator | ok: [testbed-manager]
2025-03-10 23:26:12.937238 | orchestrator |
2025-03-10 23:26:12.937263 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-03-10 23:26:12.937273 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:26:14.318251 | orchestrator |
2025-03-10 23:26:14.318296 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-03-10 23:26:14.318317 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:14.947536 | orchestrator |
2025-03-10 23:26:14.947583 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-03-10 23:26:14.947601 | orchestrator | ok: [testbed-manager]
2025-03-10 23:26:16.093776 | orchestrator |
2025-03-10 23:26:16.093813 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-03-10 23:26:16.093827 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:29.960114 | orchestrator |
2025-03-10 23:26:29.960229 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-03-10 23:26:29.960265 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:30.635739 | orchestrator |
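The repository role above removes the legacy /etc/apt/sources.list and installs a deb822-style ubuntu.sources file instead, which is the convention on Ubuntu 24.04. A hedged sketch of that file shape, written to a temp path (the mirror URL and component list here are illustrative, not the role's actual template):

```shell
# deb822-style apt source definition, as used by ubuntu.sources on
# Ubuntu 24.04. Written to a temp file; values are illustrative.
src="$(mktemp)"
cat > "$src" <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
grep -c '^Types:' "$src"
```

In the deb822 format one stanza replaces several one-line `deb ...` entries, which is why the role can delete sources.list outright after copying the new file.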
2025-03-10 23:26:30.635835 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-03-10 23:26:30.635868 | orchestrator | ok: [testbed-manager]
2025-03-10 23:26:30.682797 | orchestrator |
2025-03-10 23:26:30.682856 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-03-10 23:26:30.682883 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:26:31.659428 | orchestrator |
2025-03-10 23:26:31.659523 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-03-10 23:26:31.659558 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:32.655685 | orchestrator |
2025-03-10 23:26:32.656437 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-03-10 23:26:32.656477 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:33.271298 | orchestrator |
2025-03-10 23:26:33.271392 | orchestrator | TASK [Create configuration directory] ******************************************
2025-03-10 23:26:33.271425 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:33.308355 | orchestrator |
2025-03-10 23:26:33.308403 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-03-10 23:26:33.308418 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-03-10 23:26:35.543547 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-03-10 23:26:35.543635 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-03-10 23:26:35.543655 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-03-10 23:26:35.543685 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:46.442792 | orchestrator |
2025-03-10 23:26:46.442995 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-03-10 23:26:46.443035 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-03-10 23:26:47.522740 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-03-10 23:26:47.522794 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-03-10 23:26:47.522805 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-03-10 23:26:47.522815 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-03-10 23:26:47.522825 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-03-10 23:26:47.522834 | orchestrator |
2025-03-10 23:26:47.522843 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-03-10 23:26:47.522871 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:47.564421 | orchestrator |
2025-03-10 23:26:47.564472 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-03-10 23:26:47.564489 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:26:50.286355 | orchestrator |
2025-03-10 23:26:50.286429 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-03-10 23:26:50.286457 | orchestrator | changed: [testbed-manager]
2025-03-10 23:26:50.321715 | orchestrator |
2025-03-10 23:26:50.321805 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-03-10 23:26:50.321836 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:28:36.916176 | orchestrator |
2025-03-10 23:28:36.916223 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-03-10 23:28:36.916238 | orchestrator | changed: [testbed-manager]
2025-03-10 23:28:38.130830 | orchestrator |
2025-03-10 23:28:38.130876 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-03-10 23:28:38.130891 | orchestrator | ok: [testbed-manager]
2025-03-10 23:28:38.213871 | orchestrator |
2025-03-10 23:28:38.214088 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:28:38.214103 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-03-10 23:28:38.214109 | orchestrator |
2025-03-10 23:28:38.708241 | orchestrator | changed
2025-03-10 23:28:38.728062 |
2025-03-10 23:28:38.728203 | TASK [Reboot manager]
2025-03-10 23:28:40.271004 | orchestrator | changed
2025-03-10 23:28:40.286813 |
2025-03-10 23:28:40.287034 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-03-10 23:28:56.667738 | orchestrator | ok
2025-03-10 23:28:56.680486 |
2025-03-10 23:28:56.680619 | TASK [Wait a little longer for the manager so that everything is ready]
2025-03-10 23:29:56.729618 | orchestrator | ok
2025-03-10 23:29:56.740858 |
2025-03-10 23:29:56.741022 | TASK [Deploy manager + bootstrap nodes]
2025-03-10 23:29:59.326759 | orchestrator |
2025-03-10 23:29:59.329726 | orchestrator | # DEPLOY MANAGER
2025-03-10 23:29:59.329767 | orchestrator |
2025-03-10 23:29:59.329784 | orchestrator | + set -e
2025-03-10 23:29:59.329856 | orchestrator | + echo
2025-03-10 23:29:59.329877 | orchestrator | + echo '# DEPLOY MANAGER'
2025-03-10 23:29:59.329893 | orchestrator | + echo
2025-03-10 23:29:59.329918 | orchestrator | + cat /opt/manager-vars.sh
2025-03-10 23:29:59.329952 | orchestrator | export NUMBER_OF_NODES=6
2025-03-10 23:29:59.330110 | orchestrator |
2025-03-10 23:29:59.330135 | orchestrator | export CEPH_VERSION=quincy
2025-03-10 23:29:59.330149 | orchestrator | export CONFIGURATION_VERSION=main
2025-03-10 23:29:59.330164 | orchestrator | export MANAGER_VERSION=latest
2025-03-10 23:29:59.330178 | orchestrator | export OPENSTACK_VERSION=2024.1
2025-03-10 23:29:59.330192 | orchestrator |
2025-03-10 23:29:59.330206 | orchestrator | export ARA=false
2025-03-10 23:29:59.330220 | orchestrator | export TEMPEST=false
2025-03-10 23:29:59.330234 | orchestrator | export IS_ZUUL=true
2025-03-10 23:29:59.330248 | orchestrator |
2025-03-10 23:29:59.330261 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2025-03-10 23:29:59.330276 | orchestrator | export EXTERNAL_API=false
2025-03-10 23:29:59.330290 | orchestrator |
2025-03-10 23:29:59.330303 | orchestrator | export IMAGE_USER=ubuntu
2025-03-10 23:29:59.330317 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-03-10 23:29:59.330332 | orchestrator |
2025-03-10 23:29:59.330346 | orchestrator | export CEPH_STACK=ceph-ansible
2025-03-10 23:29:59.330365 | orchestrator |
2025-03-10 23:29:59.331513 | orchestrator | + echo
2025-03-10 23:29:59.331536 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-03-10 23:29:59.331607 | orchestrator | ++ export INTERACTIVE=false
2025-03-10 23:29:59.331718 | orchestrator | ++ INTERACTIVE=false
2025-03-10 23:29:59.331736 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-03-10 23:29:59.331759 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-03-10 23:29:59.331773 | orchestrator | + source /opt/manager-vars.sh
2025-03-10 23:29:59.331787 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-03-10 23:29:59.331801 | orchestrator | ++ NUMBER_OF_NODES=6
2025-03-10 23:29:59.331814 | orchestrator | ++ export CEPH_VERSION=quincy
2025-03-10 23:29:59.331847 | orchestrator | ++ CEPH_VERSION=quincy
2025-03-10 23:29:59.331866 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-03-10 23:29:59.386266 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-03-10 23:29:59.386326 | orchestrator | ++ export MANAGER_VERSION=latest
2025-03-10 23:29:59.386340 | orchestrator | ++ MANAGER_VERSION=latest
2025-03-10 23:29:59.386354 | orchestrator | ++ export OPENSTACK_VERSION=2024.1
2025-03-10 23:29:59.386369 | orchestrator | ++ OPENSTACK_VERSION=2024.1
2025-03-10 23:29:59.386383 | orchestrator | ++ export ARA=false
2025-03-10 23:29:59.386397 | orchestrator | ++ ARA=false
2025-03-10 23:29:59.386411 | orchestrator | ++ export TEMPEST=false
2025-03-10 23:29:59.386424 | orchestrator | ++ TEMPEST=false
2025-03-10 23:29:59.386438 | orchestrator | ++ export IS_ZUUL=true
2025-03-10 23:29:59.386451 | orchestrator | ++ IS_ZUUL=true
2025-03-10 23:29:59.386465 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2025-03-10 23:29:59.386478 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2025-03-10 23:29:59.386499 | orchestrator | ++ export EXTERNAL_API=false
2025-03-10 23:29:59.386513 | orchestrator | ++ EXTERNAL_API=false
2025-03-10 23:29:59.386526 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-03-10 23:29:59.386540 | orchestrator | ++ IMAGE_USER=ubuntu
2025-03-10 23:29:59.386554 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-03-10 23:29:59.386567 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-03-10 23:29:59.386584 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-03-10 23:29:59.386598 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-03-10 23:29:59.386612 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-03-10 23:29:59.386641 | orchestrator | + docker version
2025-03-10 23:29:59.666886 | orchestrator | Client: Docker Engine - Community
2025-03-10 23:29:59.670680 | orchestrator | Version: 27.4.1
2025-03-10 23:29:59.670715 | orchestrator | API version: 1.47
2025-03-10 23:29:59.670729 | orchestrator | Go version: go1.22.10
2025-03-10 23:29:59.670743 | orchestrator | Git commit: b9d17ea
2025-03-10 23:29:59.670757 | orchestrator | Built: Tue Dec 17 15:45:46 2024
2025-03-10 23:29:59.670773 | orchestrator | OS/Arch: linux/amd64
2025-03-10 23:29:59.670786 | orchestrator | Context: default
2025-03-10 23:29:59.670800 | orchestrator |
2025-03-10 23:29:59.670814 | orchestrator | Server: Docker Engine - Community
2025-03-10 23:29:59.670851 | orchestrator | Engine:
2025-03-10 23:29:59.670866 | orchestrator | Version: 27.4.1
2025-03-10 23:29:59.670880 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-03-10 23:29:59.670894 | orchestrator | Go version: go1.22.10
2025-03-10 23:29:59.670909 | orchestrator | Git commit: c710b88
2025-03-10 23:29:59.670949 | orchestrator | Built: Tue Dec 17 15:45:46 2024
2025-03-10 23:29:59.670963 | orchestrator | OS/Arch: linux/amd64
2025-03-10 23:29:59.670977 | orchestrator | Experimental: false
2025-03-10 23:29:59.670991 | orchestrator | containerd:
2025-03-10 23:29:59.671004 | orchestrator | Version: 1.7.25
2025-03-10 23:29:59.671018 | orchestrator | GitCommit: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb
2025-03-10 23:29:59.671032 | orchestrator | runc:
2025-03-10 23:29:59.671046 | orchestrator | Version: 1.2.4
2025-03-10 23:29:59.671059 | orchestrator | GitCommit: v1.2.4-0-g6c52b3f
2025-03-10 23:29:59.671073 | orchestrator | docker-init:
2025-03-10 23:29:59.671087 | orchestrator | Version: 0.19.0
2025-03-10 23:29:59.671101 | orchestrator | GitCommit: de40ad0
2025-03-10 23:29:59.671121 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-03-10 23:29:59.680298 | orchestrator | + set -e
2025-03-10 23:29:59.680355 | orchestrator | + source /opt/manager-vars.sh
2025-03-10 23:29:59.680372 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-03-10 23:29:59.680386 | orchestrator | ++ NUMBER_OF_NODES=6
2025-03-10 23:29:59.680404 | orchestrator | ++ export CEPH_VERSION=quincy
2025-03-10 23:29:59.680655 | orchestrator | ++ CEPH_VERSION=quincy
2025-03-10 23:29:59.680678 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-03-10 23:29:59.680692 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-03-10 23:29:59.680707 | orchestrator | ++ export MANAGER_VERSION=latest
2025-03-10 23:29:59.680721 | orchestrator | ++ MANAGER_VERSION=latest
2025-03-10 23:29:59.680763 | orchestrator | ++ export OPENSTACK_VERSION=2024.1
2025-03-10 23:29:59.680787 | orchestrator | ++ OPENSTACK_VERSION=2024.1
2025-03-10 23:29:59.680801 | orchestrator | ++ export ARA=false
2025-03-10 23:29:59.680815 | orchestrator | ++ ARA=false
2025-03-10 23:29:59.680855 | orchestrator | ++ export TEMPEST=false
2025-03-10 23:29:59.680870 | orchestrator | ++ TEMPEST=false
2025-03-10 23:29:59.680885 | orchestrator | ++ export IS_ZUUL=true
2025-03-10 23:29:59.680898 | orchestrator | ++ IS_ZUUL=true
2025-03-10 23:29:59.680913 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2025-03-10 23:29:59.680927 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2025-03-10 23:29:59.680941 | orchestrator | ++ export EXTERNAL_API=false
2025-03-10 23:29:59.680968 | orchestrator | ++ EXTERNAL_API=false
2025-03-10 23:29:59.680988 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-03-10 23:29:59.681286 | orchestrator | ++ IMAGE_USER=ubuntu
2025-03-10 23:29:59.681308 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-03-10 23:29:59.681322 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-03-10 23:29:59.681336 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-03-10 23:29:59.681350 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-03-10 23:29:59.681363 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-03-10 23:29:59.681377 | orchestrator | ++ export INTERACTIVE=false
2025-03-10 23:29:59.681397 | orchestrator | ++ INTERACTIVE=false
2025-03-10 23:29:59.681411 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-03-10 23:29:59.681425 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-03-10 23:29:59.681443 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-03-10 23:29:59.688189 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-03-10 23:29:59.688216 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh quincy
2025-03-10 23:29:59.688236 | orchestrator | + set -e
2025-03-10 23:29:59.689296 | orchestrator | + VERSION=quincy
2025-03-10 23:29:59.689332 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-03-10 23:29:59.696635 | orchestrator | + [[ -n ceph_version: quincy ]]
2025-03-10 23:29:59.700580 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: quincy/g' /opt/configuration/environments/manager/configuration.yml
2025-03-10 23:29:59.700614 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.1
2025-03-10 23:29:59.707800 | orchestrator | + set -e
2025-03-10 23:29:59.708409 | orchestrator | + VERSION=2024.1
2025-03-10 23:29:59.709203 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-03-10 23:29:59.713919 | orchestrator | + [[ -n openstack_version: 2024.1 ]]
2025-03-10 23:29:59.720530 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.1/g' /opt/configuration/environments/manager/configuration.yml
2025-03-10 23:29:59.720559 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-03-10 23:29:59.721407 | orchestrator | ++ semver latest 7.0.0
2025-03-10 23:29:59.788848 | orchestrator | + [[ -1 -ge 0 ]]
2025-03-10 23:29:59.789042 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-03-10 23:29:59.789062 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-03-10 23:29:59.789082 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-03-10 23:29:59.839615 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-03-10 23:29:59.842608 | orchestrator | + source /opt/venv/bin/activate
2025-03-10 23:29:59.845178 | orchestrator | ++ deactivate nondestructive
2025-03-10 23:29:59.845422 | orchestrator | ++ '[' -n '' ']'
2025-03-10 23:29:59.845443 | orchestrator | ++ '[' -n '' ']'
2025-03-10 23:29:59.845459 | orchestrator | ++ hash -r
2025-03-10 23:29:59.845474 | orchestrator | ++ '[' -n '' ']'
2025-03-10 23:29:59.845488 | orchestrator | ++ unset VIRTUAL_ENV
2025-03-10 23:29:59.845502 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-03-10 23:29:59.845543 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-03-10 23:29:59.845563 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-03-10 23:29:59.845663 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-03-10 23:29:59.845680 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-03-10 23:29:59.845694 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-03-10 23:29:59.845709 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-03-10 23:29:59.845724 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-03-10 23:29:59.845738 | orchestrator | ++ export PATH
2025-03-10 23:29:59.845753 | orchestrator | ++ '[' -n '' ']'
2025-03-10 23:29:59.845767 | orchestrator | ++ '[' -z '' ']'
2025-03-10 23:29:59.845780 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-03-10 23:29:59.845794 | orchestrator | ++ PS1='(venv) '
2025-03-10 23:29:59.845808 | orchestrator | ++ export PS1
2025-03-10 23:29:59.845822 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-03-10 23:29:59.845860 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-03-10 23:29:59.845878 | orchestrator | ++ hash -r
2025-03-10 23:30:01.357233 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-03-10 23:30:01.357370 | orchestrator |
2025-03-10 23:30:02.134220 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-03-10 23:30:02.134333 | orchestrator |
2025-03-10 23:30:02.134352 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-03-10 23:30:02.134386 | orchestrator | ok: [testbed-manager]
2025-03-10 23:30:03.284692 | orchestrator |
2025-03-10 23:30:03.284808 | orchestrator | TASK [Copy fact files] *********************************************************
2025-03-10 23:30:03.284862 | orchestrator | changed: [testbed-manager]
2025-03-10 23:30:06.106853 | orchestrator |
2025-03-10 23:30:06.106973 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-03-10 23:30:06.106989 | orchestrator |
2025-03-10 23:30:06.107003 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:30:06.107033 | orchestrator | ok: [testbed-manager]
2025-03-10 23:30:13.279296 | orchestrator |
2025-03-10 23:30:13.279459 | orchestrator | TASK [Pull images] *************************************************************
2025-03-10 23:30:13.279506 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-03-10 23:31:10.157972 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.7.2)
2025-03-10 23:31:10.158154 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:quincy)
2025-03-10 23:31:10.158176 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest)
2025-03-10 23:31:10.158191 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.1)
2025-03-10 23:31:10.158206 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.2-alpine)
2025-03-10 23:31:10.158220 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.10)
2025-03-10 23:31:10.158234 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest)
2025-03-10 23:31:10.158248 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest)
2025-03-10 23:31:10.158262 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-netbox:latest)
2025-03-10 23:31:10.158275 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.8-alpine)
2025-03-10 23:31:10.158289 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.3.4)
2025-03-10 23:31:10.158303 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.4)
2025-03-10 23:31:10.158339 | orchestrator |
2025-03-10 23:31:10.158355 | orchestrator | TASK [Check status] ************************************************************
2025-03-10 23:31:10.158388 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-03-10 23:31:10.158405 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-03-10 23:31:10.158419 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-03-10 23:31:10.158433 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left).
2025-03-10 23:31:10.158448 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j672604452088.1521', 'results_file': '/home/dragon/.ansible_async/j672604452088.1521', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158475 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j664987378739.1548', 'results_file': '/home/dragon/.ansible_async/j664987378739.1548', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.7.2', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158497 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j478023632867.1573', 'results_file': '/home/dragon/.ansible_async/j478023632867.1573', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:quincy', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158513 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j115126282362.1606', 'results_file': '/home/dragon/.ansible_async/j115126282362.1606', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158534 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-03-10 23:31:10.158549 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j165892778538.1638', 'results_file': '/home/dragon/.ansible_async/j165892778538.1638', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.1', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158565 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j195207486576.1670', 'results_file': '/home/dragon/.ansible_async/j195207486576.1670', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.2-alpine', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158580 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j90643727074.1705', 'results_file': '/home/dragon/.ansible_async/j90643727074.1705', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.10', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158599 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j420173335755.1738', 'results_file': '/home/dragon/.ansible_async/j420173335755.1738', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158615 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j640072088667.1772', 'results_file': '/home/dragon/.ansible_async/j640072088667.1772', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158631 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j961714429949.1804', 'results_file': '/home/dragon/.ansible_async/j961714429949.1804', 'changed': True, 'item': 'registry.osism.tech/osism/osism-netbox:latest', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158647 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j618547339559.1843', 'results_file': '/home/dragon/.ansible_async/j618547339559.1843', 'changed': True, 'item': 'index.docker.io/library/postgres:16.8-alpine', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158671 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j753576535819.1869', 'results_file': '/home/dragon/.ansible_async/j753576535819.1869', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.3.4', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158712 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j82046360513.1902', 'results_file': '/home/dragon/.ansible_async/j82046360513.1902', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.4', 'ansible_loop_var': 'item'})
2025-03-10 23:31:10.158728 | orchestrator |
2025-03-10 23:31:10.158754 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-03-10 23:31:10.199756 | orchestrator | ok: [testbed-manager]
2025-03-10 23:31:10.713104 | orchestrator |
2025-03-10 23:31:10.713210 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-03-10 23:31:10.713243 | orchestrator | changed: [testbed-manager]
2025-03-10 23:31:11.106235 | orchestrator |
2025-03-10 23:31:11.106345 | orchestrator | TASK [Add
netbox_postgres_volume_type parameter] ******************************* 2025-03-10 23:31:11.106380 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:11.460148 | orchestrator | 2025-03-10 23:31:11.460261 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-03-10 23:31:11.460317 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:11.506238 | orchestrator | 2025-03-10 23:31:11.506360 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-03-10 23:31:11.506399 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:31:11.855920 | orchestrator | 2025-03-10 23:31:11.856010 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-03-10 23:31:11.856041 | orchestrator | ok: [testbed-manager] 2025-03-10 23:31:12.031983 | orchestrator | 2025-03-10 23:31:12.032083 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-03-10 23:31:12.032114 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:31:14.186794 | orchestrator | 2025-03-10 23:31:14.186925 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-03-10 23:31:14.186958 | orchestrator | 2025-03-10 23:31:14.186973 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-10 23:31:14.187001 | orchestrator | ok: [testbed-manager] 2025-03-10 23:31:14.455797 | orchestrator | 2025-03-10 23:31:14.455901 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-03-10 23:31:14.455932 | orchestrator | 2025-03-10 23:31:14.574219 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-03-10 23:31:14.574315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for 
testbed-manager 2025-03-10 23:31:15.806370 | orchestrator | 2025-03-10 23:31:15.806488 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-03-10 23:31:15.806523 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-03-10 23:31:17.884128 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-03-10 23:31:17.884255 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-03-10 23:31:17.884285 | orchestrator | 2025-03-10 23:31:17.884313 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-03-10 23:31:17.884347 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-03-10 23:31:18.646742 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-03-10 23:31:18.646841 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-03-10 23:31:18.646856 | orchestrator | 2025-03-10 23:31:18.646870 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-03-10 23:31:18.646898 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-10 23:31:19.418230 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:19.418355 | orchestrator | 2025-03-10 23:31:19.418400 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-03-10 23:31:19.418432 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-10 23:31:19.501896 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:19.501965 | orchestrator | 2025-03-10 23:31:19.501982 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-03-10 23:31:19.502008 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:31:19.936050 | orchestrator | 2025-03-10 23:31:19.936156 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] 
******************* 2025-03-10 23:31:19.936182 | orchestrator | ok: [testbed-manager] 2025-03-10 23:31:20.101547 | orchestrator | 2025-03-10 23:31:20.101698 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-03-10 23:31:20.101736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-03-10 23:31:21.425258 | orchestrator | 2025-03-10 23:31:21.425369 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-03-10 23:31:21.425405 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:22.392996 | orchestrator | 2025-03-10 23:31:22.393121 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-03-10 23:31:22.393158 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:25.618997 | orchestrator | 2025-03-10 23:31:25.619124 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-03-10 23:31:25.619162 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:25.917616 | orchestrator | 2025-03-10 23:31:25.917743 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-03-10 23:31:25.917780 | orchestrator | 2025-03-10 23:31:26.061759 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-03-10 23:31:26.061853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-03-10 23:31:28.978342 | orchestrator | 2025-03-10 23:31:28.978459 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-03-10 23:31:28.978496 | orchestrator | ok: [testbed-manager] 2025-03-10 23:31:29.182241 | orchestrator | 2025-03-10 23:31:29.182343 | orchestrator | TASK 
[osism.services.netbox : Include config tasks] **************************** 2025-03-10 23:31:29.182377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-03-10 23:31:30.392722 | orchestrator | 2025-03-10 23:31:30.392827 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-03-10 23:31:30.392862 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-03-10 23:31:30.523540 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-03-10 23:31:30.523608 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-03-10 23:31:30.523623 | orchestrator | 2025-03-10 23:31:30.523682 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-03-10 23:31:30.523711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-03-10 23:31:31.211542 | orchestrator | 2025-03-10 23:31:31.211708 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-03-10 23:31:31.211744 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-03-10 23:31:31.941760 | orchestrator | 2025-03-10 23:31:31.941866 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-03-10 23:31:31.941903 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-10 23:31:32.391253 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:32.391345 | orchestrator | 2025-03-10 23:31:32.391362 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-03-10 23:31:32.391393 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:32.770454 | orchestrator | 2025-03-10 23:31:32.770533 | orchestrator | TASK 
[osism.services.netbox : Check if init.sql file exists] ******************* 2025-03-10 23:31:32.770563 | orchestrator | ok: [testbed-manager] 2025-03-10 23:31:32.836118 | orchestrator | 2025-03-10 23:31:32.836221 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-03-10 23:31:32.836254 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:31:33.571526 | orchestrator | 2025-03-10 23:31:33.571693 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-03-10 23:31:33.571758 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:33.700115 | orchestrator | 2025-03-10 23:31:33.700199 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-03-10 23:31:33.700229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-03-10 23:31:34.526860 | orchestrator | 2025-03-10 23:31:34.526963 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-03-10 23:31:34.527008 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-03-10 23:31:35.412849 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-03-10 23:31:35.412963 | orchestrator | 2025-03-10 23:31:35.412982 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-03-10 23:31:35.413016 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-03-10 23:31:36.152570 | orchestrator | 2025-03-10 23:31:36.152745 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-03-10 23:31:36.152780 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:36.210496 | orchestrator | 2025-03-10 23:31:36.211234 | orchestrator | TASK [osism.services.netbox : 
Copy nginx unit configuration file (<= 1.26)] **** 2025-03-10 23:31:36.211276 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:31:36.934427 | orchestrator | 2025-03-10 23:31:36.934532 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-03-10 23:31:36.934565 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:38.959417 | orchestrator | 2025-03-10 23:31:38.959540 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-03-10 23:31:38.959580 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-10 23:31:45.685053 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-10 23:31:45.685150 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-10 23:31:45.685160 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:45.685169 | orchestrator | 2025-03-10 23:31:45.685177 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-03-10 23:31:45.685197 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-03-10 23:31:46.434918 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-03-10 23:31:46.436161 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-03-10 23:31:46.436272 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-03-10 23:31:46.436290 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-03-10 23:31:46.436306 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-03-10 23:31:46.436321 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-03-10 23:31:46.436335 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-03-10 23:31:46.436349 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-03-10 23:31:46.436363 | orchestrator | changed: [testbed-manager] => (item=users) 2025-03-10 23:31:46.436377 | orchestrator | 
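[Editor's note] The "Pull images" / "Check status" task pair near the top of this log is Ansible's fire-and-forget async pattern: each pull is started with `async`/`poll: 0`, and a second task polls the returned `ansible_job_id` with `async_status` until every job finishes. A minimal sketch of that pattern, assuming hypothetical task wiring and a hypothetical `manager_images` variable (this is not the testbed playbook's actual code):

```yaml
- name: Pull images
  community.docker.docker_image:
    name: "{{ item }}"
    source: pull
  loop: "{{ manager_images }}"   # hypothetical list of image references
  async: 3600                    # allow up to an hour per pull
  poll: 0                        # do not wait; return a job id immediately
  register: pull_jobs

- name: Check status
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ pull_jobs.results }}"
  register: job_result
  until: job_result.finished
  retries: 120                   # matches the "120 retries left" countdown above
  delay: 5
```

The "FAILED - RETRYING" lines are normal for this pattern: an `until` loop reports every unfinished poll as a failure until the async job completes.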
2025-03-10 23:31:46.436392 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-03-10 23:31:46.436434 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-03-10 23:31:46.628086 | orchestrator | 2025-03-10 23:31:46.628189 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-03-10 23:31:46.628228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-03-10 23:31:47.375755 | orchestrator | 2025-03-10 23:31:47.375879 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-03-10 23:31:47.375918 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:48.108538 | orchestrator | 2025-03-10 23:31:48.108709 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-03-10 23:31:48.108749 | orchestrator | ok: [testbed-manager] 2025-03-10 23:31:48.933462 | orchestrator | 2025-03-10 23:31:48.933556 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-03-10 23:31:48.933582 | orchestrator | changed: [testbed-manager] 2025-03-10 23:31:51.318686 | orchestrator | 2025-03-10 23:31:51.318815 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-03-10 23:31:51.318855 | orchestrator | ok: [testbed-manager] 2025-03-10 23:31:52.338368 | orchestrator | 2025-03-10 23:31:52.338496 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-03-10 23:31:52.338534 | orchestrator | ok: [testbed-manager] 2025-03-10 23:32:14.789144 | orchestrator | 2025-03-10 23:32:14.789269 | orchestrator | TASK [osism.services.netbox : Manage netbox service] 
*************************** 2025-03-10 23:32:14.789303 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-03-10 23:32:14.880113 | orchestrator | ok: [testbed-manager] 2025-03-10 23:32:14.880183 | orchestrator | 2025-03-10 23:32:14.880203 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-03-10 23:32:14.880233 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:32:14.947356 | orchestrator | 2025-03-10 23:32:14.948073 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-03-10 23:32:14.948104 | orchestrator | 2025-03-10 23:32:14.948120 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-03-10 23:32:14.948148 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:32:15.054218 | orchestrator | 2025-03-10 23:32:15.054295 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-03-10 23:32:15.054325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-03-10 23:32:16.062643 | orchestrator | 2025-03-10 23:32:16.062717 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-03-10 23:32:16.062746 | orchestrator | ok: [testbed-manager] 2025-03-10 23:32:16.169341 | orchestrator | 2025-03-10 23:32:16.169399 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-03-10 23:32:16.169427 | orchestrator | ok: [testbed-manager] 2025-03-10 23:32:16.252009 | orchestrator | 2025-03-10 23:32:16.252061 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-03-10 23:32:16.252088 | orchestrator | ok: [testbed-manager] => { 2025-03-10 23:32:17.213214 | orchestrator | "msg": "The major 
version of the running postgres container is 16" 2025-03-10 23:32:17.213337 | orchestrator | } 2025-03-10 23:32:17.213357 | orchestrator | 2025-03-10 23:32:17.213374 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-03-10 23:32:17.213404 | orchestrator | ok: [testbed-manager] 2025-03-10 23:32:18.381688 | orchestrator | 2025-03-10 23:32:18.381804 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-03-10 23:32:18.381839 | orchestrator | ok: [testbed-manager] 2025-03-10 23:32:18.471147 | orchestrator | 2025-03-10 23:32:18.471215 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-03-10 23:32:18.471243 | orchestrator | ok: [testbed-manager] 2025-03-10 23:32:18.532683 | orchestrator | 2025-03-10 23:32:18.532748 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-03-10 23:32:18.532776 | orchestrator | ok: [testbed-manager] => { 2025-03-10 23:32:18.598534 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-03-10 23:32:18.598666 | orchestrator | } 2025-03-10 23:32:18.598686 | orchestrator | 2025-03-10 23:32:18.598701 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-03-10 23:32:18.598732 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:32:18.681310 | orchestrator | 2025-03-10 23:32:18.681387 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-03-10 23:32:18.681408 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:32:18.754539 | orchestrator | 2025-03-10 23:32:18.754702 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-03-10 23:32:18.754740 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:32:18.821202 | orchestrator | 2025-03-10 23:32:18.821247 | orchestrator | 
RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-03-10 23:32:18.821273 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:32:18.887727 | orchestrator | 2025-03-10 23:32:18.887796 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-03-10 23:32:18.887823 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:32:19.005220 | orchestrator | 2025-03-10 23:32:19.005294 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-03-10 23:32:19.005323 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:32:20.718365 | orchestrator | 2025-03-10 23:32:20.718486 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-03-10 23:32:20.718534 | orchestrator | changed: [testbed-manager] 2025-03-10 23:32:20.841273 | orchestrator | 2025-03-10 23:32:20.841370 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-03-10 23:32:20.841396 | orchestrator | ok: [testbed-manager] 2025-03-10 23:33:20.918078 | orchestrator | 2025-03-10 23:33:20.918218 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-03-10 23:33:20.918254 | orchestrator | Pausing for 60 seconds 2025-03-10 23:33:21.058342 | orchestrator | changed: [testbed-manager] 2025-03-10 23:33:21.058476 | orchestrator | 2025-03-10 23:33:21.058493 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-03-10 23:33:21.058572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-03-10 23:37:25.014136 | orchestrator | 2025-03-10 23:37:25.014253 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-03-10 23:37:25.014317 | 
orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-03-10 23:37:27.612587 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-03-10 23:37:27.612676 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-03-10 23:37:27.612694 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-03-10 23:37:27.612709 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-03-10 23:37:27.612723 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-03-10 23:37:27.612737 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-03-10 23:37:27.612751 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-03-10 23:37:27.612765 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-03-10 23:37:27.612778 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-03-10 23:37:27.612792 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-03-10 23:37:27.612806 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-03-10 23:37:27.612819 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-03-10 23:37:27.612833 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-03-10 23:37:27.612847 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-03-10 23:37:27.612861 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-03-10 23:37:27.612874 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-03-10 23:37:27.612888 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-03-10 23:37:27.612901 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-03-10 23:37:27.612915 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left).
2025-03-10 23:37:27.612928 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left).
2025-03-10 23:37:27.612967 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left).
2025-03-10 23:37:27.612982 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left).
2025-03-10 23:37:27.612996 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:27.613011 | orchestrator |
2025-03-10 23:37:27.613026 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-03-10 23:37:27.613040 | orchestrator |
2025-03-10 23:37:27.613055 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:37:27.613083 | orchestrator | ok: [testbed-manager]
2025-03-10 23:37:27.785959 | orchestrator |
2025-03-10 23:37:27.786077 | orchestrator | TASK [Apply manager role] ******************************************************
2025-03-10 23:37:27.786111 | orchestrator |
2025-03-10 23:37:27.877383 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-03-10 23:37:27.877487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-03-10 23:37:29.930474 | orchestrator |
2025-03-10 23:37:29.930518 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-03-10 23:37:29.930532 | orchestrator | ok: [testbed-manager]
2025-03-10 23:37:29.980618 | orchestrator |
2025-03-10 23:37:29.980640 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-03-10 23:37:29.980649 | orchestrator | ok: [testbed-manager]
2025-03-10 23:37:30.121282 | orchestrator |
2025-03-10 23:37:30.121320 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-03-10 23:37:30.121331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-03-10 23:37:33.195575 | orchestrator |
2025-03-10 23:37:33.195625 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-03-10 23:37:33.195638 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-03-10 23:37:33.917307 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-03-10 23:37:33.917352 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-03-10 23:37:33.917357 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-03-10 23:37:33.917361 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-03-10 23:37:33.917365 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-03-10 23:37:33.917370 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-03-10 23:37:33.917374 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-03-10 23:37:33.917379 | orchestrator |
2025-03-10 23:37:33.917384 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-03-10 23:37:33.917395 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:34.009706 | orchestrator |
2025-03-10 23:37:34.009727 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-03-10 23:37:34.009737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-03-10 23:37:35.322744 | orchestrator |
2025-03-10 23:37:35.322802 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-03-10 23:37:35.322815 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-03-10 23:37:36.036197 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-03-10 23:37:36.036391 | orchestrator |
2025-03-10 23:37:36.036412 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-03-10 23:37:36.036445 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:36.106489 | orchestrator |
2025-03-10 23:37:36.106534 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-03-10 23:37:36.106558 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:37:36.182095 | orchestrator |
2025-03-10 23:37:36.182161 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-03-10 23:37:36.182188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-03-10 23:37:37.675083 | orchestrator |
2025-03-10 23:37:37.675165 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-03-10 23:37:37.675223 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:37:38.381785 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:37:38.381833 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:38.381839 | orchestrator |
2025-03-10 23:37:38.381844 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-03-10 23:37:38.381855 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:38.499847 | orchestrator |
2025-03-10 23:37:38.499921 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-03-10 23:37:38.499952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-03-10 23:37:39.244853 | orchestrator |
2025-03-10 23:37:39.244966 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-03-10 23:37:39.245003 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:37:39.913464 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:39.913553 | orchestrator |
2025-03-10 23:37:39.913569 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-03-10 23:37:39.913597 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:40.046818 | orchestrator |
2025-03-10 23:37:40.046884 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-03-10 23:37:40.046908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-03-10 23:37:40.747370 | orchestrator |
2025-03-10 23:37:40.747465 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-03-10 23:37:40.747499 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:44.215815 | orchestrator |
2025-03-10 23:37:44.215925 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-03-10 23:37:44.215960 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:45.587098 | orchestrator |
2025-03-10 23:37:45.587216 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-03-10 23:37:45.587330 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-03-10 23:37:46.350090 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-03-10 23:37:46.350193 | orchestrator |
2025-03-10 23:37:46.350211 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-03-10 23:37:46.350290 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:46.722868 | orchestrator |
2025-03-10 23:37:46.722978 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-03-10 23:37:46.723015 | orchestrator | ok: [testbed-manager]
2025-03-10 23:37:46.827554 | orchestrator |
2025-03-10 23:37:46.827650 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-03-10 23:37:46.827681 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:37:47.589076 | orchestrator |
2025-03-10 23:37:47.589179 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-03-10 23:37:47.589211 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:47.675813 | orchestrator |
2025-03-10 23:37:47.675847 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-03-10 23:37:47.675882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-03-10 23:37:47.745020 | orchestrator |
2025-03-10 23:37:47.745053 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-03-10 23:37:47.745074 | orchestrator | ok: [testbed-manager]
2025-03-10 23:37:50.047126 | orchestrator |
2025-03-10 23:37:50.047235 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-03-10 23:37:50.047294 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-03-10 23:37:50.810008 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-03-10 23:37:50.810180 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-03-10 23:37:50.811565 | orchestrator |
2025-03-10 23:37:50.811608 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-03-10 23:37:50.811656 | orchestrator | changed: [testbed-manager]
2025-03-10 23:37:50.898815 | orchestrator |
2025-03-10 23:37:50.898893 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-03-10 23:37:50.898926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-03-10 23:37:50.947533 | orchestrator |
2025-03-10 23:37:50.947646 | orchestrator | TASK
[osism.services.manager : Include scripts vars file] ********************** 2025-03-10 23:37:50.947681 | orchestrator | ok: [testbed-manager] 2025-03-10 23:37:51.735203 | orchestrator | 2025-03-10 23:37:51.735375 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-03-10 23:37:51.735414 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-03-10 23:37:51.824050 | orchestrator | 2025-03-10 23:37:51.824149 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-03-10 23:37:51.824182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-03-10 23:37:52.640108 | orchestrator | 2025-03-10 23:37:52.640215 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-03-10 23:37:52.640296 | orchestrator | changed: [testbed-manager] 2025-03-10 23:37:53.359869 | orchestrator | 2025-03-10 23:37:53.359970 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-03-10 23:37:53.360003 | orchestrator | ok: [testbed-manager] 2025-03-10 23:37:53.428044 | orchestrator | 2025-03-10 23:37:53.428086 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-03-10 23:37:53.428110 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:37:53.488340 | orchestrator | 2025-03-10 23:37:53.488374 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-03-10 23:37:53.488395 | orchestrator | ok: [testbed-manager] 2025-03-10 23:37:54.450943 | orchestrator | 2025-03-10 23:37:54.451053 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-03-10 23:37:54.451087 | orchestrator | changed: [testbed-manager] 2025-03-10 23:38:15.482897 | orchestrator | 2025-03-10 
23:38:15.483032 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-03-10 23:38:15.483070 | orchestrator | changed: [testbed-manager] 2025-03-10 23:38:16.245204 | orchestrator | 2025-03-10 23:38:16.245334 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-03-10 23:38:16.245353 | orchestrator | ok: [testbed-manager] 2025-03-10 23:38:19.424205 | orchestrator | 2025-03-10 23:38:19.424362 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-03-10 23:38:19.424396 | orchestrator | changed: [testbed-manager] 2025-03-10 23:38:19.492579 | orchestrator | 2025-03-10 23:38:19.492625 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-03-10 23:38:19.492652 | orchestrator | ok: [testbed-manager] 2025-03-10 23:38:19.568555 | orchestrator | 2025-03-10 23:38:19.568597 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-10 23:38:19.568612 | orchestrator | 2025-03-10 23:38:19.568626 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-03-10 23:38:19.568647 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:39:19.621636 | orchestrator | 2025-03-10 23:39:19.621769 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-03-10 23:39:19.621808 | orchestrator | Pausing for 60 seconds 2025-03-10 23:39:26.855196 | orchestrator | changed: [testbed-manager] 2025-03-10 23:39:26.855329 | orchestrator | 2025-03-10 23:39:26.855348 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-03-10 23:39:26.855382 | orchestrator | changed: [testbed-manager] 2025-03-10 23:40:09.056487 | orchestrator | 2025-03-10 23:40:09.056612 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2025-03-10 23:40:09.056646 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-03-10 23:40:16.170221 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-03-10 23:40:16.170335 | orchestrator | changed: [testbed-manager] 2025-03-10 23:40:16.170352 | orchestrator | 2025-03-10 23:40:16.170365 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-03-10 23:40:16.170422 | orchestrator | changed: [testbed-manager] 2025-03-10 23:40:16.281427 | orchestrator | 2025-03-10 23:40:16.281515 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-03-10 23:40:16.281547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-03-10 23:40:16.359490 | orchestrator | 2025-03-10 23:40:16.359564 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-10 23:40:16.359579 | orchestrator | 2025-03-10 23:40:16.359592 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-03-10 23:40:16.359617 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:40:16.553789 | orchestrator | 2025-03-10 23:40:16.553863 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:40:16.553878 | orchestrator | testbed-manager : ok=103 changed=54 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-03-10 23:40:16.553891 | orchestrator | 2025-03-10 23:40:16.553917 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-10 23:40:16.560780 | orchestrator | + deactivate 2025-03-10 23:40:16.560829 | orchestrator | + '[' -n 
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-03-10 23:40:16.560844 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-10 23:40:16.560857 | orchestrator | + export PATH 2025-03-10 23:40:16.560870 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-03-10 23:40:16.560882 | orchestrator | + '[' -n '' ']' 2025-03-10 23:40:16.560895 | orchestrator | + hash -r 2025-03-10 23:40:16.560907 | orchestrator | + '[' -n '' ']' 2025-03-10 23:40:16.560919 | orchestrator | + unset VIRTUAL_ENV 2025-03-10 23:40:16.560931 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-03-10 23:40:16.560944 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-03-10 23:40:16.560956 | orchestrator | + unset -f deactivate 2025-03-10 23:40:16.560970 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-03-10 23:40:16.561005 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-10 23:40:16.561515 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-10 23:40:16.561543 | orchestrator | + local max_attempts=60 2025-03-10 23:40:16.561559 | orchestrator | + local name=ceph-ansible 2025-03-10 23:40:16.561573 | orchestrator | + local attempt_num=1 2025-03-10 23:40:16.561597 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-10 23:40:16.591908 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-10 23:40:16.592766 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-10 23:40:16.592801 | orchestrator | + local max_attempts=60 2025-03-10 23:40:16.592815 | orchestrator | + local name=kolla-ansible 2025-03-10 23:40:16.592829 | orchestrator | + local attempt_num=1 2025-03-10 23:40:16.592850 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-10 23:40:16.619523 | orchestrator | + [[ healthy 
== \h\e\a\l\t\h\y ]] 2025-03-10 23:40:16.620599 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-03-10 23:40:16.620640 | orchestrator | + local max_attempts=60 2025-03-10 23:40:16.620658 | orchestrator | + local name=osism-ansible 2025-03-10 23:40:16.620672 | orchestrator | + local attempt_num=1 2025-03-10 23:40:16.620694 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-10 23:40:16.649694 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-10 23:40:18.064009 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-10 23:40:18.064141 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-10 23:40:18.064195 | orchestrator | ++ semver latest 8.0.0 2025-03-10 23:40:18.104323 | orchestrator | + [[ -1 -ge 0 ]] 2025-03-10 23:40:18.105031 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-03-10 23:40:18.105074 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1 2025-03-10 23:40:18.105091 | orchestrator | + local max_attempts=60 2025-03-10 23:40:18.105138 | orchestrator | + local name=netbox-netbox-1 2025-03-10 23:40:18.105154 | orchestrator | + local attempt_num=1 2025-03-10 23:40:18.105178 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1 2025-03-10 23:40:18.132943 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-10 23:40:18.139202 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh 2025-03-10 23:40:18.139270 | orchestrator | + set -e 2025-03-10 23:40:20.019113 | orchestrator | + osism netbox import 2025-03-10 23:40:20.019248 | orchestrator | 2025-03-10 23:40:20 | INFO  | Task 963e58a1-8d93-431d-a03a-a087aebee0c6 is running. Wait. No more output. 2025-03-10 23:40:24.625009 | orchestrator | + osism netbox init 2025-03-10 23:40:26.418185 | orchestrator | 2025-03-10 23:40:26 | INFO  | Task cc4ea89e-ae49-41ff-8c30-2512ff637ecb was prepared for execution. 
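The `set -x` trace above repeatedly expands a `wait_for_container_healthy` helper that polls `docker inspect -f '{{.State.Health.Status}}'` until a container reports healthy. A minimal sketch of such a helper, reconstructed from the traced variables (`max_attempts`, `name`, `attempt_num`); the `DOCKER_BIN` override is an assumption added here so the loop can be exercised without a Docker daemon (the real script calls `/usr/bin/docker` directly, and its retry delay and failure handling may differ):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the docker binary path; the traced script
# hard-codes /usr/bin/docker.
DOCKER_BIN="${DOCKER_BIN:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until it reports "healthy",
    # giving up after max_attempts polls.
    until [[ "$("$DOCKER_BIN" inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

In the trace all three containers (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) were already healthy, so each loop exited on the first poll.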
2025-03-10 23:40:28.464671 | orchestrator | 2025-03-10 23:40:26 | INFO  | It takes a moment until task cc4ea89e-ae49-41ff-8c30-2512ff637ecb has been started and output is visible here. 2025-03-10 23:40:28.464765 | orchestrator | 2025-03-10 23:40:28.466609 | orchestrator | PLAY [Wait for netbox service] ************************************************* 2025-03-10 23:40:28.466638 | orchestrator | 2025-03-10 23:40:28.466648 | orchestrator | TASK [Wait for netbox service] ************************************************* 2025-03-10 23:40:29.530007 | orchestrator | [WARNING]: Platform linux on host localhost is using the discovered Python 2025-03-10 23:40:29.530329 | orchestrator | interpreter at /usr/local/bin/python3.13, but future installation of another 2025-03-10 23:40:29.531353 | orchestrator | Python interpreter could change the meaning of that path. See 2025-03-10 23:40:29.531781 | orchestrator | https://docs.ansible.com/ansible- 2025-03-10 23:40:29.532549 | orchestrator | core/2.18/reference_appendices/interpreter_discovery.html for more information. 
2025-03-10 23:40:29.535764 | orchestrator | ok: [localhost] 2025-03-10 23:40:29.536009 | orchestrator | 2025-03-10 23:40:29.536630 | orchestrator | PLAY [Manage sites and locations] ********************************************** 2025-03-10 23:40:29.537209 | orchestrator | 2025-03-10 23:40:29.537598 | orchestrator | TASK [Manage Discworld site] *************************************************** 2025-03-10 23:40:31.197413 | orchestrator | changed: [localhost] 2025-03-10 23:40:33.169453 | orchestrator | 2025-03-10 23:40:33.169568 | orchestrator | TASK [Manage Ankh-Morpork location] ******************************************** 2025-03-10 23:40:33.169603 | orchestrator | changed: [localhost] 2025-03-10 23:40:33.172286 | orchestrator | 2025-03-10 23:40:33.172395 | orchestrator | PLAY [Manage IP prefixes] ****************************************************** 2025-03-10 23:40:41.460388 | orchestrator | 2025-03-10 23:40:41.460636 | orchestrator | TASK [Manage 192.168.16.0/20] ************************************************** 2025-03-10 23:40:41.460688 | orchestrator | changed: [localhost] 2025-03-10 23:40:42.999766 | orchestrator | 2025-03-10 23:40:42.999877 | orchestrator | TASK [Manage 192.168.112.0/20] ************************************************* 2025-03-10 23:40:42.999915 | orchestrator | changed: [localhost] 2025-03-10 23:40:43.000412 | orchestrator | 2025-03-10 23:40:43.000443 | orchestrator | PLAY [Manage IP addresses] ***************************************************** 2025-03-10 23:40:43.000460 | orchestrator | 2025-03-10 23:40:43.000482 | orchestrator | TASK [Manage api.testbed.osism.xyz IP address] ********************************* 2025-03-10 23:40:44.571928 | orchestrator | changed: [localhost] 2025-03-10 23:40:44.572613 | orchestrator | 2025-03-10 23:40:44.572658 | orchestrator | TASK [Manage api-int.testbed.osism.xyz IP address] ***************************** 2025-03-10 23:40:45.944701 | orchestrator | changed: [localhost] 2025-03-10 23:40:45.945892 | 
orchestrator | 2025-03-10 23:40:45.947672 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:40:45.948327 | orchestrator | 2025-03-10 23:40:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-10 23:40:45.948364 | orchestrator | 2025-03-10 23:40:45 | INFO  | Please wait and do not abort execution. 2025-03-10 23:40:45.948382 | orchestrator | localhost : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-10 23:40:45.948405 | orchestrator | 2025-03-10 23:40:46.358688 | orchestrator | + osism netbox manage 1000 2025-03-10 23:40:47.962007 | orchestrator | 2025-03-10 23:40:47 | INFO  | Task f216f403-aa3f-4bb0-874b-d7b4c77e0580 was prepared for execution. 2025-03-10 23:40:49.887372 | orchestrator | 2025-03-10 23:40:47 | INFO  | It takes a moment until task f216f403-aa3f-4bb0-874b-d7b4c77e0580 has been started and output is visible here. 2025-03-10 23:40:49.887503 | orchestrator | 2025-03-10 23:40:49.887997 | orchestrator | PLAY [Manage rack 1000] ******************************************************** 2025-03-10 23:40:49.888062 | orchestrator | 2025-03-10 23:40:49.888957 | orchestrator | TASK [Manage rack 1000] ******************************************************** 2025-03-10 23:40:52.169706 | orchestrator | changed: [localhost] 2025-03-10 23:40:59.699349 | orchestrator | 2025-03-10 23:40:59.699467 | orchestrator | TASK [Manage testbed-switch-0] ************************************************* 2025-03-10 23:40:59.699499 | orchestrator | changed: [localhost] 2025-03-10 23:41:07.032577 | orchestrator | 2025-03-10 23:41:07.032708 | orchestrator | TASK [Manage testbed-switch-1] ************************************************* 2025-03-10 23:41:07.032747 | orchestrator | changed: [localhost] 2025-03-10 23:41:07.033006 | orchestrator | 2025-03-10 23:41:13.947604 | orchestrator | TASK [Manage testbed-switch-2] 
************************************************* 2025-03-10 23:41:13.947740 | orchestrator | changed: [localhost] 2025-03-10 23:41:13.948116 | orchestrator | 2025-03-10 23:41:13.948148 | orchestrator | TASK [Manage testbed-manager] ************************************************** 2025-03-10 23:41:17.409842 | orchestrator | changed: [localhost] 2025-03-10 23:41:17.410167 | orchestrator | 2025-03-10 23:41:17.410205 | orchestrator | TASK [Manage testbed-node-0] *************************************************** 2025-03-10 23:41:20.191229 | orchestrator | changed: [localhost] 2025-03-10 23:41:20.191770 | orchestrator | 2025-03-10 23:41:20.191806 | orchestrator | TASK [Manage testbed-node-1] *************************************************** 2025-03-10 23:41:22.874615 | orchestrator | changed: [localhost] 2025-03-10 23:41:22.875192 | orchestrator | 2025-03-10 23:41:22.875242 | orchestrator | TASK [Manage testbed-node-2] *************************************************** 2025-03-10 23:41:25.563769 | orchestrator | changed: [localhost] 2025-03-10 23:41:25.565086 | orchestrator | 2025-03-10 23:41:28.543755 | orchestrator | TASK [Manage testbed-node-3] *************************************************** 2025-03-10 23:41:28.543877 | orchestrator | changed: [localhost] 2025-03-10 23:41:28.544189 | orchestrator | 2025-03-10 23:41:28.544222 | orchestrator | TASK [Manage testbed-node-4] *************************************************** 2025-03-10 23:41:31.580876 | orchestrator | changed: [localhost] 2025-03-10 23:41:31.582356 | orchestrator | 2025-03-10 23:41:31.583389 | orchestrator | TASK [Manage testbed-node-5] *************************************************** 2025-03-10 23:41:34.251875 | orchestrator | changed: [localhost] 2025-03-10 23:41:36.916511 | orchestrator | 2025-03-10 23:41:36.916767 | orchestrator | TASK [Manage testbed-node-6] *************************************************** 2025-03-10 23:41:36.916808 | orchestrator | changed: [localhost] 2025-03-10 
23:41:39.533643 | orchestrator | 2025-03-10 23:41:39.533750 | orchestrator | TASK [Manage testbed-node-7] *************************************************** 2025-03-10 23:41:39.533784 | orchestrator | changed: [localhost] 2025-03-10 23:41:39.533997 | orchestrator | 2025-03-10 23:41:39.534405 | orchestrator | TASK [Manage testbed-node-8] *************************************************** 2025-03-10 23:41:42.176914 | orchestrator | changed: [localhost] 2025-03-10 23:41:44.776108 | orchestrator | 2025-03-10 23:41:44.776303 | orchestrator | TASK [Manage testbed-node-9] *************************************************** 2025-03-10 23:41:44.776341 | orchestrator | changed: [localhost] 2025-03-10 23:41:44.777276 | orchestrator | 2025-03-10 23:41:44.777322 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:41:44.777474 | orchestrator | 2025-03-10 23:41:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-10 23:41:44.777502 | orchestrator | 2025-03-10 23:41:44 | INFO  | Please wait and do not abort execution. 
2025-03-10 23:41:44.777536 | orchestrator | localhost : ok=15 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-10 23:41:44.778653 | orchestrator | 2025-03-10 23:41:45.177706 | orchestrator | + osism netbox connect 1000 --state a 2025-03-10 23:41:46.905752 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task 151cbc42-a12d-4d07-ac37-f3418eb75005 for device testbed-node-7 is running in background 2025-03-10 23:41:46.908837 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task ceff5b25-fc19-473c-a3ca-704bf084676b for device testbed-node-8 is running in background 2025-03-10 23:41:46.912579 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task 7fb68667-de76-4bb9-861f-09453f0a0d77 for device testbed-switch-1 is running in background 2025-03-10 23:41:46.915682 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task 64b572fc-aa0c-4359-985f-d27402990e60 for device testbed-node-9 is running in background 2025-03-10 23:41:46.918318 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task 156c5edb-e5df-4935-8f2c-b59f5f0d6fb6 for device testbed-node-3 is running in background 2025-03-10 23:41:46.921471 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task 7b144268-16f0-4c61-8f5e-be9a0875cd14 for device testbed-node-2 is running in background 2025-03-10 23:41:46.924659 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task ef322654-bf05-4651-9dc4-79ca07f76c90 for device testbed-node-5 is running in background 2025-03-10 23:41:46.926364 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task 948ce814-2d9e-4157-9599-cf8519708076 for device testbed-node-4 is running in background 2025-03-10 23:41:46.928515 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task db579236-67b7-47ca-95e1-a900b75aefc2 for device testbed-manager is running in background 2025-03-10 23:41:46.931340 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task 5ba5ec67-4405-4dcc-9b67-92a26d24cf7b for device testbed-switch-0 is running in background 2025-03-10 23:41:46.934382 | orchestrator | 2025-03-10 23:41:46 | INFO  
| Task 207ed265-1fd3-4911-9549-12df59dd2f1d for device testbed-switch-2 is running in background 2025-03-10 23:41:46.935173 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task 8ac1d0e2-38a7-4766-b521-8c763b190071 for device testbed-node-6 is running in background 2025-03-10 23:41:46.941565 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task b0db84fd-a463-4080-8bea-84f491759386 for device testbed-node-0 is running in background 2025-03-10 23:41:46.943187 | orchestrator | 2025-03-10 23:41:46 | INFO  | Task a9521f1c-6f54-41da-9812-8e18c75de195 for device testbed-node-1 is running in background 2025-03-10 23:41:47.207478 | orchestrator | 2025-03-10 23:41:46 | INFO  | Tasks are running in background. No more output. Check Flower for logs. 2025-03-10 23:41:47.207585 | orchestrator | + osism netbox disable --no-wait testbed-switch-0 2025-03-10 23:41:49.280460 | orchestrator | + osism netbox disable --no-wait testbed-switch-1 2025-03-10 23:41:51.330702 | orchestrator | + osism netbox disable --no-wait testbed-switch-2 2025-03-10 23:41:53.369394 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-03-10 23:41:53.874708 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-03-10 23:41:53.883979 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:quincy "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2025-03-10 23:41:53.884053 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2025-03-10 23:41:53.884070 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-03-10 23:41:53.884085 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 3 minutes ago Up 3 minutes (healthy) 8000/tcp 2025-03-10 23:41:53.884109 | orchestrator | 
manager-beat-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" beat 3 minutes ago Up 3 minutes (healthy) 2025-03-10 23:41:53.884124 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" conductor 3 minutes ago Up 3 minutes (healthy) 2025-03-10 23:41:53.884138 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" flower 3 minutes ago Up 3 minutes (healthy) 2025-03-10 23:41:53.884178 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2025-03-10 23:41:53.884193 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" listener 3 minutes ago Up 3 minutes (healthy) 2025-03-10 23:41:53.884207 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb 3 minutes ago Up 3 minutes (healthy) 3306/tcp 2025-03-10 23:41:53.884220 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism-netbox:latest "/usr/bin/tini -- os…" netbox 3 minutes ago Up 3 minutes (healthy) 2025-03-10 23:41:53.884234 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" openstack 3 minutes ago Up 3 minutes (healthy) 2025-03-10 23:41:53.884248 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 3 minutes ago Up 3 minutes (healthy) 6379/tcp 2025-03-10 23:41:53.884265 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/usr/bin/tini -- os…" watchdog 3 minutes ago Up 3 minutes (healthy) 2025-03-10 23:41:53.884280 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2025-03-10 23:41:53.884317 | orchestrator | osism-kubernetes 
registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2025-03-10 23:41:53.884333 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/usr/bin/tini -- sl…" osismclient 3 minutes ago Up 3 minutes (healthy) 2025-03-10 23:41:53.884358 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-03-10 23:41:54.139646 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-03-10 23:41:54.149358 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.10 "/usr/bin/tini -- /o…" netbox 10 minutes ago Up 9 minutes (healthy) 2025-03-10 23:41:54.149419 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.10 "/opt/netbox/venv/bi…" netbox-worker 10 minutes ago Up 4 minutes (healthy) 2025-03-10 23:41:54.149437 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.8-alpine "docker-entrypoint.s…" postgres 10 minutes ago Up 9 minutes (healthy) 5432/tcp 2025-03-10 23:41:54.149453 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 10 minutes ago Up 9 minutes (healthy) 6379/tcp 2025-03-10 23:41:54.149479 | orchestrator | ++ semver latest 7.0.0 2025-03-10 23:41:54.217316 | orchestrator | + [[ -1 -ge 0 ]] 2025-03-10 23:41:54.221804 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-03-10 23:41:54.221847 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-03-10 23:41:54.221873 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-03-10 23:41:56.433704 | orchestrator | 2025-03-10 23:41:56 | INFO  | Task 396c643f-5d11-4b57-8855-0dd389237a3c (resolvconf) was prepared for execution. 2025-03-10 23:42:00.209238 | orchestrator | 2025-03-10 23:41:56 | INFO  | It takes a moment until task 396c643f-5d11-4b57-8855-0dd389237a3c (resolvconf) has been started and output is visible here. 
2025-03-10 23:42:00.209379 | orchestrator | 2025-03-10 23:42:05.598287 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-03-10 23:42:05.598457 | orchestrator | 2025-03-10 23:42:05.598479 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-10 23:42:05.598494 | orchestrator | Monday 10 March 2025 23:42:00 +0000 (0:00:00.114) 0:00:00.114 ********** 2025-03-10 23:42:05.598527 | orchestrator | ok: [testbed-manager] 2025-03-10 23:42:05.657456 | orchestrator | 2025-03-10 23:42:05.657608 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-03-10 23:42:05.657637 | orchestrator | Monday 10 March 2025 23:42:05 +0000 (0:00:05.398) 0:00:05.514 ********** 2025-03-10 23:42:05.657680 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:42:05.657952 | orchestrator | 2025-03-10 23:42:05.658006 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-03-10 23:42:05.658193 | orchestrator | Monday 10 March 2025 23:42:05 +0000 (0:00:00.063) 0:00:05.577 ********** 2025-03-10 23:42:05.769239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-03-10 23:42:05.769807 | orchestrator | 2025-03-10 23:42:05.769844 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-03-10 23:42:05.770094 | orchestrator | Monday 10 March 2025 23:42:05 +0000 (0:00:00.111) 0:00:05.688 ********** 2025-03-10 23:42:05.862931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-03-10 23:42:05.864646 | orchestrator | 2025-03-10 23:42:05.864688 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-03-10 23:42:05.864851 | orchestrator | Monday 10 March 2025 23:42:05 +0000 (0:00:00.092) 0:00:05.781 ********** 2025-03-10 23:42:07.159112 | orchestrator | ok: [testbed-manager] 2025-03-10 23:42:07.160494 | orchestrator | 2025-03-10 23:42:07.160958 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-03-10 23:42:07.161027 | orchestrator | Monday 10 March 2025 23:42:07 +0000 (0:00:01.295) 0:00:07.077 ********** 2025-03-10 23:42:07.222721 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:42:07.222875 | orchestrator | 2025-03-10 23:42:07.223045 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-03-10 23:42:07.223463 | orchestrator | Monday 10 March 2025 23:42:07 +0000 (0:00:00.061) 0:00:07.139 ********** 2025-03-10 23:42:07.809444 | orchestrator | ok: [testbed-manager] 2025-03-10 23:42:07.911546 | orchestrator | 2025-03-10 23:42:07.911604 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-03-10 23:42:07.911620 | orchestrator | Monday 10 March 2025 23:42:07 +0000 (0:00:00.587) 0:00:07.726 ********** 2025-03-10 23:42:07.911644 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:42:07.912166 | orchestrator | 2025-03-10 23:42:07.913350 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-03-10 23:42:07.913378 | orchestrator | Monday 10 March 2025 23:42:07 +0000 (0:00:00.103) 0:00:07.829 ********** 2025-03-10 23:42:08.606598 | orchestrator | changed: [testbed-manager] 2025-03-10 23:42:08.607825 | orchestrator | 2025-03-10 23:42:08.607861 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-03-10 23:42:09.924688 | orchestrator | Monday 10 March 2025 23:42:08 +0000 (0:00:00.691) 0:00:08.521 ********** 2025-03-10 23:42:09.924811 | orchestrator | changed: 
[testbed-manager] 2025-03-10 23:42:09.925911 | orchestrator | 2025-03-10 23:42:09.926251 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-03-10 23:42:09.926604 | orchestrator | Monday 10 March 2025 23:42:09 +0000 (0:00:01.316) 0:00:09.838 ********** 2025-03-10 23:42:11.153406 | orchestrator | ok: [testbed-manager] 2025-03-10 23:42:11.290569 | orchestrator | 2025-03-10 23:42:11.290690 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-03-10 23:42:11.290710 | orchestrator | Monday 10 March 2025 23:42:11 +0000 (0:00:01.231) 0:00:11.069 ********** 2025-03-10 23:42:11.290745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-03-10 23:42:12.796250 | orchestrator | 2025-03-10 23:42:12.796364 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-03-10 23:42:12.796383 | orchestrator | Monday 10 March 2025 23:42:11 +0000 (0:00:00.135) 0:00:11.205 ********** 2025-03-10 23:42:12.796414 | orchestrator | changed: [testbed-manager] 2025-03-10 23:42:12.797446 | orchestrator | 2025-03-10 23:42:12.797478 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:42:12.797608 | orchestrator | 2025-03-10 23:42:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-10 23:42:12.797707 | orchestrator | 2025-03-10 23:42:12 | INFO  | Please wait and do not abort execution. 
2025-03-10 23:42:12.799436 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-10 23:42:12.801235 | orchestrator | 2025-03-10 23:42:12.802396 | orchestrator | 2025-03-10 23:42:12.805260 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-10 23:42:12.810101 | orchestrator | Monday 10 March 2025 23:42:12 +0000 (0:00:01.505) 0:00:12.711 ********** 2025-03-10 23:42:12.810564 | orchestrator | =============================================================================== 2025-03-10 23:42:12.811208 | orchestrator | Gathering Facts --------------------------------------------------------- 5.40s 2025-03-10 23:42:12.814290 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.51s 2025-03-10 23:42:12.814385 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.32s 2025-03-10 23:42:12.814403 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.30s 2025-03-10 23:42:12.814417 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.23s 2025-03-10 23:42:12.814435 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.69s 2025-03-10 23:42:12.814799 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.59s 2025-03-10 23:42:12.815265 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.14s 2025-03-10 23:42:12.815535 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.11s 2025-03-10 23:42:12.821647 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.10s 2025-03-10 23:42:12.824800 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-03-10 
23:42:12.828234 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-03-10 23:42:12.828624 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-03-10 23:42:13.434525 | orchestrator | + osism apply sshconfig 2025-03-10 23:42:15.196927 | orchestrator | 2025-03-10 23:42:15 | INFO  | Task 8a2b6a28-9e93-4b5b-b5cd-7f5c1988c567 (sshconfig) was prepared for execution. 2025-03-10 23:42:18.945223 | orchestrator | 2025-03-10 23:42:15 | INFO  | It takes a moment until task 8a2b6a28-9e93-4b5b-b5cd-7f5c1988c567 (sshconfig) has been started and output is visible here. 2025-03-10 23:42:18.945363 | orchestrator | 2025-03-10 23:42:18.945908 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-03-10 23:42:18.945943 | orchestrator | 2025-03-10 23:42:18.946537 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-03-10 23:42:18.947406 | orchestrator | Monday 10 March 2025 23:42:18 +0000 (0:00:00.134) 0:00:00.134 ********** 2025-03-10 23:42:19.524080 | orchestrator | ok: [testbed-manager] 2025-03-10 23:42:19.525348 | orchestrator | 2025-03-10 23:42:19.525389 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-03-10 23:42:19.525598 | orchestrator | Monday 10 March 2025 23:42:19 +0000 (0:00:00.579) 0:00:00.713 ********** 2025-03-10 23:42:20.173106 | orchestrator | changed: [testbed-manager] 2025-03-10 23:42:20.173405 | orchestrator | 2025-03-10 23:42:20.173443 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-03-10 23:42:20.173697 | orchestrator | Monday 10 March 2025 23:42:20 +0000 (0:00:00.649) 0:00:01.363 ********** 2025-03-10 23:42:26.945078 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-03-10 23:42:26.946991 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0) 2025-03-10 23:42:26.947355 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-03-10 23:42:26.947919 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-03-10 23:42:26.949360 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-03-10 23:42:26.950255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-03-10 23:42:26.950563 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-03-10 23:42:26.950596 | orchestrator | 2025-03-10 23:42:26.950617 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-03-10 23:42:26.951421 | orchestrator | Monday 10 March 2025 23:42:26 +0000 (0:00:06.772) 0:00:08.135 ********** 2025-03-10 23:42:27.015470 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:42:27.017772 | orchestrator | 2025-03-10 23:42:27.017813 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-03-10 23:42:27.695097 | orchestrator | Monday 10 March 2025 23:42:27 +0000 (0:00:00.072) 0:00:08.208 ********** 2025-03-10 23:42:27.695216 | orchestrator | changed: [testbed-manager] 2025-03-10 23:42:27.696149 | orchestrator | 2025-03-10 23:42:27.696391 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:42:27.696423 | orchestrator | 2025-03-10 23:42:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-10 23:42:27.696614 | orchestrator | 2025-03-10 23:42:27 | INFO  | Please wait and do not abort execution. 
2025-03-10 23:42:27.697069 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:42:27.699499 | orchestrator | 2025-03-10 23:42:27.699708 | orchestrator | 2025-03-10 23:42:27.701195 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-10 23:42:27.701286 | orchestrator | Monday 10 March 2025 23:42:27 +0000 (0:00:00.679) 0:00:08.888 ********** 2025-03-10 23:42:27.701302 | orchestrator | =============================================================================== 2025-03-10 23:42:27.701320 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.77s 2025-03-10 23:42:27.701783 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.68s 2025-03-10 23:42:27.702128 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.65s 2025-03-10 23:42:27.702487 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2025-03-10 23:42:27.702989 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-03-10 23:42:28.280233 | orchestrator | + osism apply known-hosts 2025-03-10 23:42:30.030550 | orchestrator | 2025-03-10 23:42:30 | INFO  | Task b64b20e2-4c17-4809-9cc7-cde97840bb18 (known-hosts) was prepared for execution. 2025-03-10 23:42:33.643532 | orchestrator | 2025-03-10 23:42:30 | INFO  | It takes a moment until task b64b20e2-4c17-4809-9cc7-cde97840bb18 (known-hosts) has been started and output is visible here. 
2025-03-10 23:42:33.643683 | orchestrator | 2025-03-10 23:42:40.014606 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-03-10 23:42:40.014743 | orchestrator | 2025-03-10 23:42:40.014765 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-03-10 23:42:40.014781 | orchestrator | Monday 10 March 2025 23:42:33 +0000 (0:00:00.144) 0:00:00.144 ********** 2025-03-10 23:42:40.014812 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-10 23:42:40.019411 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-03-10 23:42:40.019525 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-10 23:42:40.019544 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-10 23:42:40.019558 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-10 23:42:40.019572 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-10 23:42:40.019600 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-10 23:42:40.227033 | orchestrator | 2025-03-10 23:42:40.227113 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-03-10 23:42:40.227130 | orchestrator | Monday 10 March 2025 23:42:40 +0000 (0:00:06.373) 0:00:06.517 ********** 2025-03-10 23:42:40.227160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-10 23:42:40.227355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-10 23:42:40.227386 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-10 23:42:40.228204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-10 23:42:40.228277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-10 23:42:40.228298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-10 23:42:40.228708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-10 23:42:40.228805 | orchestrator | 2025-03-10 23:42:40.229173 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:40.230009 | orchestrator | Monday 10 March 2025 23:42:40 +0000 (0:00:00.214) 0:00:06.732 ********** 2025-03-10 23:42:41.531696 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQClYsfluK/UZ9uU7DGcjGJp/hg1AM2cz/YdfCExG42A7ojw1Ha187DfeWmJecGDq6mOe1LJtiktSLPGF27RgUuAJ77+FAmRVdwb6iafX9amZUujzSCrJsKwWKuYGIJl3/SBGLlOOfC3tv4Ry6gwG9tYcXGijXFTjn2Mze2SEbCDpHcWaEV+sG4u3Za845QpEI4LsgB5/Wc66pKWHK1WkhOVx410S1CXyUy1W35+jA7gVZ4/XPgp3fGAU0QToQ5JPlRfrk0iPZqSzO4pgIzJqydZOKUyNrDS6aYce97Ozyi4BsITGSPR6veCOeOJ8jJhNuN7HvpR1gtSC41VbwME0ZCYjGi07LJUh0sJTMpPpn7hvsB2o3qB8BmMdhrCA+EKqE6P9PJX7itSkcs+y36FZF3udQG5wrCBVCIdHj+Xv1bosUGZ1GoQkz+J/0tx/BaAKv6VOLKOTrK0VTlUu+nxFHxDsvce+BZcSYjwZKBq/Az3Ryjb0VJCQP7LHH1fnToecdM=) 2025-03-10 23:42:41.531875 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG24h9YBm29lsmZGs4EjO6qJTWHXDOrYZZJt9PYnxJDqspFwF6v+JD5rcwTjzYXncSOQjVdmEC/Y6Ri2CyFczc4=) 2025-03-10 23:42:41.531915 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOU/jyT+ok00U2f+HjNS1w5s06UtKuixxW2aivRC/zxv) 2025-03-10 23:42:41.532197 | orchestrator | 2025-03-10 23:42:41.532557 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:41.532758 | orchestrator | Monday 10 March 2025 23:42:41 +0000 (0:00:01.299) 0:00:08.031 ********** 2025-03-10 23:42:42.731806 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCMqQI3aEGLzLIYsQyVGpd3JTFU5R2MnQQwbaJ6S5gZXlvYeCqNdDm2p6wrS1e+haI+/tAoKl91C9SBZBC1/7gaiwWzzGPJgfHGVs+XV71W4pKYirnTj+eBBOh5Q2qh0RxEG5JMOU8HptC+53mlvWo0FpUf3XpJoYOnSTcPgsxjd1xl2Ba/b3yWzij5jdGME86q0X9m9+0MrCLvrgqJr/iXfmvf+ifxl4YC0vzk265+uaJmqYanF7Tc9UQoSblKmF0DiwnT5CY1DRqIbJppHyrgqWEd3SbLigB4nk1MHKw6r1ZH8NmcKAL4shqN2G1f2P3pDijcwbIT5W7xUiVgcufMx6Enw3zD4YMV3siV08wsN87EYDtJYkMYwCZnJFAJBnQRrR6OPbEpReLqGz+oZs2rzT8/PR0hetfN/aDA2E00kgx6pIraVRYQbNY2ll2g7vKNHRdF/kRJQGcR6MIv7gGXGhq6Mkj2oWZr0Fc/sDTK0oRJtb72VSG/3+KFjDp9JEc=) 2025-03-10 23:42:42.732039 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpihAp59rLBR81XiDRSTlwK0xwIPmr/AhgiV9WSE5N7xEdq6SHhsoS1rh8X1fvYRk3USI1JJLiCvc7GH/LWyf0=) 2025-03-10 23:42:42.733387 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBCfDzZcVemDvnxtVu+Tp07j9taTom3Td4AmBJI64DAa) 2025-03-10 23:42:42.734598 | orchestrator | 2025-03-10 23:42:42.735272 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:42.736007 | orchestrator | Monday 10 March 2025 23:42:42 +0000 (0:00:01.204) 0:00:09.236 ********** 2025-03-10 23:42:43.862410 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoN+WlPjVSEFdDGi+D1bEGkDz/TsxavHHjEIpKG5cPLFGbODMGxubQTC1xTcbW2ajO7rIOuI1HhaliN5tg4741XWfl+O+4eQssDZHfbtxujEmFMpHv2a+uVPfb5ECeRlp3m3tHPh/p5WPtJcLiUaCNizpFIE/daicS2brOqup1wHg/exzdjqL5RTqj61tL5i4VJMtmGLCoun28ocev9YwumG2jA/lOVnIaV1hrT8+tZ+IDNJ3KC8RqXuTuImgVIQPyhsLU/ZIYC7fs3GQu+ywC/9z/PMrQJJvic3IEN01KkWtFMQ9hzLEIp6ovwfx6/dS98ipOsXR9x7bTX12wCZX/PRYczUarYfp6OYgX3TO86ZocygZqmY92JVUtetD+DxZoFBN+IIcfDj4ZMcjWcxI6YnVKbK5a0NtfrGFfun/BbqS/q0XGOhxBPTbzA9MXOuf05U+szzuLVd2KUjs6rNeMbRSnQD1V6kICXEoB9oAL5YNhghkEpnZAjyrR4aHD3P8=) 2025-03-10 23:42:43.863126 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHEYNq+qz8ukBKvkhH+tdV0s5+bKUyBkjyBExO9OcPf8p7gRUz0AXWYZ25NbdJ0bXoAP2zHmJZtsUaOiUxOaMew=) 2025-03-10 23:42:43.863551 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG6YFrGi57hrU3pY5OwByMrvwGIZw3MhKFnB+IUx3Uoe) 2025-03-10 23:42:43.864734 | orchestrator | 2025-03-10 23:42:43.865072 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:43.865359 | orchestrator | Monday 10 March 2025 23:42:43 +0000 (0:00:01.129) 
0:00:10.365 ********** 2025-03-10 23:42:45.065737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6CENTxI6L0YNxEcuCcTe9G/8QutYAbcZMRneXDjAcG5mHj1/kFnr9E34dYwr6f5KAmYREkMM/Q9YHofgXTZ8/TutdjD679+oloMPAJ6rWLB7WjCjOr8zyOA5pzIRG3TjQIzoahe/pfcw1ENfC/oaLTQAy6PTntLT/6wlctO8mSlEOdCaAc/0tBu72m73LwF/op7FxjKDHDsat/03H5L/bjIYe+I0pUVxKbZJ7OmkXEdzGWduiRbrb3J1/EtArofnNjUI1vkIxp5IG46+ugXg4EJK3in+5L7eeL1lp4d6L2tpnEnVbZsEL5b8aZHMhLofe/YUtSNxjIvevTxpOVHpGcF4WCh1Pa168OYpPw5aZnCHXqTn2MRjO5LMYg/1BuPWkLFdd+Z1D42x0hX/GbGqSowtlj3wI52uPhlBQtzUuIk7+R6o3viQCQh5+ycCvqnH8vf0UVzqAj0PhMZLk1YjleRJIeYNtRqMraIJkNZrTBFXmfmU9et5K/z4FJ0BjYx8=) 2025-03-10 23:42:45.068675 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjOA2Qyy/W/XZus+bNfV6EGE93uojb1GyLArUnTb1FHe6SN1/udV3VM7hwKwAi20fhB0jj/HJwS8qYk20kAVZU=) 2025-03-10 23:42:45.069512 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK+oeCtWJuxwJUmGHDO51GLriHKKAJtqL966JLdyT1+r) 2025-03-10 23:42:45.071173 | orchestrator | 2025-03-10 23:42:45.071627 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:45.072057 | orchestrator | Monday 10 March 2025 23:42:45 +0000 (0:00:01.203) 0:00:11.569 ********** 2025-03-10 23:42:46.250183 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP1mJYxp7urOj/Nei6pYH3x5Rap1l7Be6kTRzHhuqNxx) 2025-03-10 23:42:46.250334 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDHg9EeWOZVGVd3BxlOeH9KvkPrPv8XreU7/kPUM9hl7Wwy/TVOA6XOJ2AldECizLZYmT0B78fPafc/nGLdgHPl+sgRTlEc+9Q3AF6gCiL75oBjdjP00TReRpATrK2ZfB4dbSkcZTn3OQHGPOWAqo/7NdHUIHqTZ7hnFfGvPf/bmQTfMEIYUNW059cniwWKZwOiof1Nol2wTjtyOfHRZHq5245dAsLms3l8VEhvWkMqcy6wHKfY1pvEDESUjDNkBtIT+qrfW6uZnnTROMYwnxNpdwlSDKE37KgYMv1IqFZ4u5RZXFTtgDMZgnFoFruCRcEUJKWQeKpzUofh+3s7ce63C7IqVLL+2D+wFRMipJLGUADvauj/U3vTPMiMVQwSByfEnw1/50u70WsayZxvc9V7dhD7Hsx7gpxMMVEG0ECLoYnI3Dl8n9RA+ay8gsw+G1RfQBZ6MGarzB5r2onZ4VQOMwkiHKIFobztrd0Bq08myTITuLfnBfs+najY/HmePbU=) 2025-03-10 23:42:46.250395 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDi6E4pXUx3Ra3dhbVB0cUtCtc5lsKjukM73xRoy+t3ZPgy/uQ+mR1eqynA2709WU9B7Jy+OrKHqiQV4XB2eKyg=) 2025-03-10 23:42:46.250411 | orchestrator | 2025-03-10 23:42:46.250429 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:46.251187 | orchestrator | Monday 10 March 2025 23:42:46 +0000 (0:00:01.181) 0:00:12.751 ********** 2025-03-10 23:42:47.380433 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBJ9q9Otb+oKZWkfpFk8PbJLMKo2W01wEgWLE0IfEXm4mw4yFpOMvytgtBc4j1hf1RywiUQKRmNco/JLmLUetpGDOuGrI2lwa49z7AfeYNryF+63Oh9qchvWavZnvj2s2qoDr2siBWOPu8lKxK0LWxIBl09k1vQBR4yTNOcyezDb3yAsx9reanY4gd9QZrDMG7QgJJ5KIvUKK7deMd7EMHv4bbKrtLy/o4Xt7HCmEEELm71UsptmVPbqxL2Bu/hiA9pSmDazRvyszq5jczdx38jp4k/cEMmNpJQlIQInaU1UYijWRgYL4bYeo8vvPLKY0mNLG593e+afj7JHtrkJ2gZwFEWkJ2+ui4FLyNzCCn04unXDb7/HRHYcmoEC7xdbMpX6bn+CsAgDhJwGjTcH5c/7KhyHaxDj35ovXSyn0MT83LPE29OBch8s+Yo0DRkzj02dhf9fzGnSlK+mCoW/DhOhd7H+bxHIe5XNF7Ru54JRUCcbytBGtnobntTte6QA0=) 2025-03-10 23:42:47.381363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAmYVuFGzJ1+W0YAC2PsObZRX1KzPgMwTqOyfmP83jf8MyW9HdfQ9X7Qj2NEn6ZWef68VwoOjx2l3BRt9XQE+yM=) 
2025-03-10 23:42:47.381401 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIeqth8SPopammdbWfG7t50dXVChPXKhw0jwnEbnhpjS) 2025-03-10 23:42:47.382081 | orchestrator | 2025-03-10 23:42:47.382116 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:48.541285 | orchestrator | Monday 10 March 2025 23:42:47 +0000 (0:00:01.132) 0:00:13.883 ********** 2025-03-10 23:42:48.541428 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8sgFtCIAIhd/NmdagJ86b0iGVRjSDoe4Zr2jn2FQjYGK3+Ow4RhaA1cc6nYcQFD5fPfBE952snMzSKNfhQ49IJ3HQUYTJjtHGLaLOKrQTRCOa3aJ3GsKb7jSYB6j4deOwbR61par0PQU17zp8uxnFbOJZlBYy7rmvf47atU+FzVwhF8pCdtu2DeYIa3kHfwN7cdj5nZVone9WvkgDJo3zlXdki70M1nB5iTQjc/4bebE/ZLKDuvu7crbdya8xX5B1nhMGAT7mbwUdmy7jf32TRY3OMx3wLTTyXJU4UUOAHrqGhjRIY/cbtsTPO/WD7eD2U1DtPPVnWDbfklNjLQ5uNsHxihF0mv/FnzvDy++fe4kgDfvv1pnsfDSAJkCjUZYL3mCZ9uQrVBsP0bwaaYqdOMg4LjqiJ+gnQHxxUzJLNSKTZhcCiBI9a/YgIrj1Z/v5Ze8YBlxoUina4vljb46oKzxgD3TeuAO1lBkx19oxdAG17RGwfZ8nUqZ3Q5To6Zc=) 2025-03-10 23:42:48.541498 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBYnFsVe/hJk8hCfND8lZxSjxRy0CAn8W1PbRr/UFop3coC7HL1Jl+jx5qQsvRS1oW1S76f15xoos0iBUj0IIH4=) 2025-03-10 23:42:48.541517 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuO8tnNNS1k/euzGm+6U4FV5HhZHRdcnyggLRGerju0) 2025-03-10 23:42:48.541533 | orchestrator | 2025-03-10 23:42:48.541552 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-03-10 23:42:48.541780 | orchestrator | Monday 10 March 2025 23:42:48 +0000 (0:00:01.161) 0:00:15.044 ********** 2025-03-10 23:42:54.042402 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-10 23:42:54.042983 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-0) 2025-03-10 23:42:54.043025 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-10 23:42:54.043040 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-10 23:42:54.043063 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-10 23:42:54.043821 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-10 23:42:54.044560 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-10 23:42:54.046196 | orchestrator | 2025-03-10 23:42:54.046238 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-03-10 23:42:54.046261 | orchestrator | Monday 10 March 2025 23:42:54 +0000 (0:00:05.498) 0:00:20.543 ********** 2025-03-10 23:42:54.251453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-10 23:42:54.251647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-10 23:42:54.251714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-10 23:42:54.251775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-10 23:42:54.252168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-10 23:42:54.252366 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-10 23:42:54.252466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-10 23:42:54.253115 | orchestrator | 2025-03-10 23:42:54.253520 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:55.526387 | orchestrator | Monday 10 March 2025 23:42:54 +0000 (0:00:00.211) 0:00:20.755 ********** 2025-03-10 23:42:55.526543 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClYsfluK/UZ9uU7DGcjGJp/hg1AM2cz/YdfCExG42A7ojw1Ha187DfeWmJecGDq6mOe1LJtiktSLPGF27RgUuAJ77+FAmRVdwb6iafX9amZUujzSCrJsKwWKuYGIJl3/SBGLlOOfC3tv4Ry6gwG9tYcXGijXFTjn2Mze2SEbCDpHcWaEV+sG4u3Za845QpEI4LsgB5/Wc66pKWHK1WkhOVx410S1CXyUy1W35+jA7gVZ4/XPgp3fGAU0QToQ5JPlRfrk0iPZqSzO4pgIzJqydZOKUyNrDS6aYce97Ozyi4BsITGSPR6veCOeOJ8jJhNuN7HvpR1gtSC41VbwME0ZCYjGi07LJUh0sJTMpPpn7hvsB2o3qB8BmMdhrCA+EKqE6P9PJX7itSkcs+y36FZF3udQG5wrCBVCIdHj+Xv1bosUGZ1GoQkz+J/0tx/BaAKv6VOLKOTrK0VTlUu+nxFHxDsvce+BZcSYjwZKBq/Az3Ryjb0VJCQP7LHH1fnToecdM=) 2025-03-10 23:42:55.527064 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG24h9YBm29lsmZGs4EjO6qJTWHXDOrYZZJt9PYnxJDqspFwF6v+JD5rcwTjzYXncSOQjVdmEC/Y6Ri2CyFczc4=) 2025-03-10 23:42:55.527331 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOU/jyT+ok00U2f+HjNS1w5s06UtKuixxW2aivRC/zxv) 2025-03-10 23:42:55.528260 | orchestrator | 2025-03-10 23:42:55.529882 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:55.529936 | orchestrator | Monday 10 March 2025 
23:42:55 +0000 (0:00:01.274) 0:00:22.030 ********** 2025-03-10 23:42:56.737907 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCMqQI3aEGLzLIYsQyVGpd3JTFU5R2MnQQwbaJ6S5gZXlvYeCqNdDm2p6wrS1e+haI+/tAoKl91C9SBZBC1/7gaiwWzzGPJgfHGVs+XV71W4pKYirnTj+eBBOh5Q2qh0RxEG5JMOU8HptC+53mlvWo0FpUf3XpJoYOnSTcPgsxjd1xl2Ba/b3yWzij5jdGME86q0X9m9+0MrCLvrgqJr/iXfmvf+ifxl4YC0vzk265+uaJmqYanF7Tc9UQoSblKmF0DiwnT5CY1DRqIbJppHyrgqWEd3SbLigB4nk1MHKw6r1ZH8NmcKAL4shqN2G1f2P3pDijcwbIT5W7xUiVgcufMx6Enw3zD4YMV3siV08wsN87EYDtJYkMYwCZnJFAJBnQRrR6OPbEpReLqGz+oZs2rzT8/PR0hetfN/aDA2E00kgx6pIraVRYQbNY2ll2g7vKNHRdF/kRJQGcR6MIv7gGXGhq6Mkj2oWZr0Fc/sDTK0oRJtb72VSG/3+KFjDp9JEc=) 2025-03-10 23:42:56.739135 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpihAp59rLBR81XiDRSTlwK0xwIPmr/AhgiV9WSE5N7xEdq6SHhsoS1rh8X1fvYRk3USI1JJLiCvc7GH/LWyf0=) 2025-03-10 23:42:56.739186 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBCfDzZcVemDvnxtVu+Tp07j9taTom3Td4AmBJI64DAa) 2025-03-10 23:42:56.740053 | orchestrator | 2025-03-10 23:42:56.740526 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:56.741300 | orchestrator | Monday 10 March 2025 23:42:56 +0000 (0:00:01.211) 0:00:23.241 ********** 2025-03-10 23:42:57.959284 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHEYNq+qz8ukBKvkhH+tdV0s5+bKUyBkjyBExO9OcPf8p7gRUz0AXWYZ25NbdJ0bXoAP2zHmJZtsUaOiUxOaMew=) 2025-03-10 23:42:57.960506 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCoN+WlPjVSEFdDGi+D1bEGkDz/TsxavHHjEIpKG5cPLFGbODMGxubQTC1xTcbW2ajO7rIOuI1HhaliN5tg4741XWfl+O+4eQssDZHfbtxujEmFMpHv2a+uVPfb5ECeRlp3m3tHPh/p5WPtJcLiUaCNizpFIE/daicS2brOqup1wHg/exzdjqL5RTqj61tL5i4VJMtmGLCoun28ocev9YwumG2jA/lOVnIaV1hrT8+tZ+IDNJ3KC8RqXuTuImgVIQPyhsLU/ZIYC7fs3GQu+ywC/9z/PMrQJJvic3IEN01KkWtFMQ9hzLEIp6ovwfx6/dS98ipOsXR9x7bTX12wCZX/PRYczUarYfp6OYgX3TO86ZocygZqmY92JVUtetD+DxZoFBN+IIcfDj4ZMcjWcxI6YnVKbK5a0NtfrGFfun/BbqS/q0XGOhxBPTbzA9MXOuf05U+szzuLVd2KUjs6rNeMbRSnQD1V6kICXEoB9oAL5YNhghkEpnZAjyrR4aHD3P8=) 2025-03-10 23:42:57.961588 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG6YFrGi57hrU3pY5OwByMrvwGIZw3MhKFnB+IUx3Uoe) 2025-03-10 23:42:57.963053 | orchestrator | 2025-03-10 23:42:57.964285 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-10 23:42:57.965961 | orchestrator | Monday 10 March 2025 23:42:57 +0000 (0:00:01.222) 0:00:24.463 ********** 2025-03-10 23:42:59.083947 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK+oeCtWJuxwJUmGHDO51GLriHKKAJtqL966JLdyT1+r) 2025-03-10 23:42:59.085466 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6CENTxI6L0YNxEcuCcTe9G/8QutYAbcZMRneXDjAcG5mHj1/kFnr9E34dYwr6f5KAmYREkMM/Q9YHofgXTZ8/TutdjD679+oloMPAJ6rWLB7WjCjOr8zyOA5pzIRG3TjQIzoahe/pfcw1ENfC/oaLTQAy6PTntLT/6wlctO8mSlEOdCaAc/0tBu72m73LwF/op7FxjKDHDsat/03H5L/bjIYe+I0pUVxKbZJ7OmkXEdzGWduiRbrb3J1/EtArofnNjUI1vkIxp5IG46+ugXg4EJK3in+5L7eeL1lp4d6L2tpnEnVbZsEL5b8aZHMhLofe/YUtSNxjIvevTxpOVHpGcF4WCh1Pa168OYpPw5aZnCHXqTn2MRjO5LMYg/1BuPWkLFdd+Z1D42x0hX/GbGqSowtlj3wI52uPhlBQtzUuIk7+R6o3viQCQh5+ycCvqnH8vf0UVzqAj0PhMZLk1YjleRJIeYNtRqMraIJkNZrTBFXmfmU9et5K/z4FJ0BjYx8=) 2025-03-10 23:42:59.087131 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjOA2Qyy/W/XZus+bNfV6EGE93uojb1GyLArUnTb1FHe6SN1/udV3VM7hwKwAi20fhB0jj/HJwS8qYk20kAVZU=)
2025-03-10 23:42:59.087164 | orchestrator |
2025-03-10 23:42:59.087186 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-03-10 23:42:59.089154 | orchestrator | Monday 10 March 2025 23:42:59 +0000 (0:00:01.124) 0:00:25.587 **********
2025-03-10 23:43:00.253418 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDi6E4pXUx3Ra3dhbVB0cUtCtc5lsKjukM73xRoy+t3ZPgy/uQ+mR1eqynA2709WU9B7Jy+OrKHqiQV4XB2eKyg=)
2025-03-10 23:43:00.253602 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHg9EeWOZVGVd3BxlOeH9KvkPrPv8XreU7/kPUM9hl7Wwy/TVOA6XOJ2AldECizLZYmT0B78fPafc/nGLdgHPl+sgRTlEc+9Q3AF6gCiL75oBjdjP00TReRpATrK2ZfB4dbSkcZTn3OQHGPOWAqo/7NdHUIHqTZ7hnFfGvPf/bmQTfMEIYUNW059cniwWKZwOiof1Nol2wTjtyOfHRZHq5245dAsLms3l8VEhvWkMqcy6wHKfY1pvEDESUjDNkBtIT+qrfW6uZnnTROMYwnxNpdwlSDKE37KgYMv1IqFZ4u5RZXFTtgDMZgnFoFruCRcEUJKWQeKpzUofh+3s7ce63C7IqVLL+2D+wFRMipJLGUADvauj/U3vTPMiMVQwSByfEnw1/50u70WsayZxvc9V7dhD7Hsx7gpxMMVEG0ECLoYnI3Dl8n9RA+ay8gsw+G1RfQBZ6MGarzB5r2onZ4VQOMwkiHKIFobztrd0Bq08myTITuLfnBfs+najY/HmePbU=)
2025-03-10 23:43:00.253658 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP1mJYxp7urOj/Nei6pYH3x5Rap1l7Be6kTRzHhuqNxx)
2025-03-10 23:43:00.253677 | orchestrator |
2025-03-10 23:43:00.253700 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-03-10 23:43:00.255646 | orchestrator | Monday 10 March 2025 23:43:00 +0000 (0:00:01.170) 0:00:26.758 **********
2025-03-10 23:43:01.470429 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAmYVuFGzJ1+W0YAC2PsObZRX1KzPgMwTqOyfmP83jf8MyW9HdfQ9X7Qj2NEn6ZWef68VwoOjx2l3BRt9XQE+yM=)
2025-03-10 23:43:01.471174 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBJ9q9Otb+oKZWkfpFk8PbJLMKo2W01wEgWLE0IfEXm4mw4yFpOMvytgtBc4j1hf1RywiUQKRmNco/JLmLUetpGDOuGrI2lwa49z7AfeYNryF+63Oh9qchvWavZnvj2s2qoDr2siBWOPu8lKxK0LWxIBl09k1vQBR4yTNOcyezDb3yAsx9reanY4gd9QZrDMG7QgJJ5KIvUKK7deMd7EMHv4bbKrtLy/o4Xt7HCmEEELm71UsptmVPbqxL2Bu/hiA9pSmDazRvyszq5jczdx38jp4k/cEMmNpJQlIQInaU1UYijWRgYL4bYeo8vvPLKY0mNLG593e+afj7JHtrkJ2gZwFEWkJ2+ui4FLyNzCCn04unXDb7/HRHYcmoEC7xdbMpX6bn+CsAgDhJwGjTcH5c/7KhyHaxDj35ovXSyn0MT83LPE29OBch8s+Yo0DRkzj02dhf9fzGnSlK+mCoW/DhOhd7H+bxHIe5XNF7Ru54JRUCcbytBGtnobntTte6QA0=)
2025-03-10 23:43:01.471216 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIeqth8SPopammdbWfG7t50dXVChPXKhw0jwnEbnhpjS)
2025-03-10 23:43:01.471580 | orchestrator |
2025-03-10 23:43:01.472174 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-03-10 23:43:01.472582 | orchestrator | Monday 10 March 2025 23:43:01 +0000 (0:00:01.212) 0:00:27.971 **********
2025-03-10 23:43:02.736009 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8sgFtCIAIhd/NmdagJ86b0iGVRjSDoe4Zr2jn2FQjYGK3+Ow4RhaA1cc6nYcQFD5fPfBE952snMzSKNfhQ49IJ3HQUYTJjtHGLaLOKrQTRCOa3aJ3GsKb7jSYB6j4deOwbR61par0PQU17zp8uxnFbOJZlBYy7rmvf47atU+FzVwhF8pCdtu2DeYIa3kHfwN7cdj5nZVone9WvkgDJo3zlXdki70M1nB5iTQjc/4bebE/ZLKDuvu7crbdya8xX5B1nhMGAT7mbwUdmy7jf32TRY3OMx3wLTTyXJU4UUOAHrqGhjRIY/cbtsTPO/WD7eD2U1DtPPVnWDbfklNjLQ5uNsHxihF0mv/FnzvDy++fe4kgDfvv1pnsfDSAJkCjUZYL3mCZ9uQrVBsP0bwaaYqdOMg4LjqiJ+gnQHxxUzJLNSKTZhcCiBI9a/YgIrj1Z/v5Ze8YBlxoUina4vljb46oKzxgD3TeuAO1lBkx19oxdAG17RGwfZ8nUqZ3Q5To6Zc=)
2025-03-10 23:43:02.737112 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBYnFsVe/hJk8hCfND8lZxSjxRy0CAn8W1PbRr/UFop3coC7HL1Jl+jx5qQsvRS1oW1S76f15xoos0iBUj0IIH4=)
2025-03-10 23:43:02.737161 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuO8tnNNS1k/euzGm+6U4FV5HhZHRdcnyggLRGerju0)
2025-03-10 23:43:02.737771 | orchestrator |
2025-03-10 23:43:02.738408 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-03-10 23:43:02.738591 | orchestrator | Monday 10 March 2025 23:43:02 +0000 (0:00:01.266) 0:00:29.237 **********
2025-03-10 23:43:02.922984 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-03-10 23:43:02.924013 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-03-10 23:43:02.924038 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-03-10 23:43:02.924058 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-03-10 23:43:02.925023 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-03-10 23:43:02.925064 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-03-10 23:43:02.925423 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-03-10 23:43:02.925877 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:43:02.926215 | orchestrator |
2025-03-10 23:43:02.926521 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-03-10 23:43:02.926869 | orchestrator | Monday 10 March 2025 23:43:02 +0000 (0:00:00.190) 0:00:29.427 **********
2025-03-10 23:43:03.119720 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:43:03.120456 | orchestrator |
2025-03-10 23:43:03.120996 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-03-10 23:43:03.121541 | orchestrator | Monday 10 March 2025 23:43:03 +0000 (0:00:00.197) 0:00:29.624 **********
2025-03-10 23:43:03.176504 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:43:03.177011 | orchestrator |
2025-03-10 23:43:03.177762 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-03-10 23:43:03.178555 | orchestrator | Monday 10 March 2025 23:43:03 +0000 (0:00:00.057) 0:00:29.681 **********
2025-03-10 23:43:03.798403 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:03.798631 | orchestrator |
2025-03-10 23:43:03.798668 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:43:03.799385 | orchestrator | 2025-03-10 23:43:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-10 23:43:03.799411 | orchestrator | 2025-03-10 23:43:03 | INFO  | Please wait and do not abort execution.
2025-03-10 23:43:03.799435 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-10 23:43:03.800022 | orchestrator |
2025-03-10 23:43:03.801079 | orchestrator |
2025-03-10 23:43:03.801325 | orchestrator | TASKS RECAP ********************************************************************
2025-03-10 23:43:03.801821 | orchestrator | Monday 10 March 2025 23:43:03 +0000 (0:00:00.617) 0:00:30.299 **********
2025-03-10 23:43:03.802391 | orchestrator | ===============================================================================
2025-03-10 23:43:03.802821 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.37s
2025-03-10 23:43:03.803140 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.50s
2025-03-10 23:43:03.803499 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.30s
2025-03-10 23:43:03.803991 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.27s
2025-03-10 23:43:03.804265 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.27s
2025-03-10 23:43:03.804609 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s
2025-03-10 23:43:03.805082 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s
2025-03-10 23:43:03.805366 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s
2025-03-10 23:43:03.805862 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s
2025-03-10 23:43:03.806301 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s
2025-03-10 23:43:03.806478 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s
2025-03-10 23:43:03.806789 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s
2025-03-10 23:43:03.808107 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s
2025-03-10 23:43:03.808316 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-03-10 23:43:03.808342 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-03-10 23:43:03.808358 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2025-03-10 23:43:03.808377 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.62s
2025-03-10 23:43:03.808781 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.21s
2025-03-10 23:43:03.808964 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.21s
2025-03-10 23:43:03.809401 | orchestrator | osism.commons.known_hosts : Write extra known_hosts entries ------------- 0.20s
2025-03-10 23:43:04.345518 | orchestrator | + osism apply squid
2025-03-10 23:43:06.023304 | orchestrator | 2025-03-10 23:43:06 | INFO  | Task d24ab0c0-1a27-42a3-9fa5-b89ed4685c5c (squid) was prepared for execution.
2025-03-10 23:43:09.645526 | orchestrator | 2025-03-10 23:43:06 | INFO  | It takes a moment until task d24ab0c0-1a27-42a3-9fa5-b89ed4685c5c (squid) has been started and output is visible here.
2025-03-10 23:43:09.645648 | orchestrator |
2025-03-10 23:43:09.647010 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-03-10 23:43:09.647041 | orchestrator |
2025-03-10 23:43:09.647055 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-03-10 23:43:09.647074 | orchestrator | Monday 10 March 2025 23:43:09 +0000 (0:00:00.147) 0:00:00.147 **********
2025-03-10 23:43:09.736573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-03-10 23:43:09.737230 | orchestrator |
2025-03-10 23:43:09.739240 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-03-10 23:43:11.350779 | orchestrator | Monday 10 March 2025 23:43:09 +0000 (0:00:00.096) 0:00:00.244 **********
2025-03-10 23:43:11.350958 | orchestrator | ok: [testbed-manager]
2025-03-10 23:43:11.351093 | orchestrator |
2025-03-10 23:43:11.351116 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-03-10 23:43:11.351136 | orchestrator | Monday 10 March 2025 23:43:11 +0000 (0:00:01.612) 0:00:01.857 **********
2025-03-10 23:43:12.658259 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-03-10 23:43:12.658437 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-03-10 23:43:12.659428 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-03-10 23:43:12.660017 | orchestrator |
2025-03-10 23:43:12.661061 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-03-10 23:43:12.661287 | orchestrator | Monday 10 March 2025 23:43:12 +0000 (0:00:01.308) 0:00:03.165 **********
2025-03-10 23:43:13.868831 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-03-10 23:43:13.869023 | orchestrator |
2025-03-10 23:43:13.869055 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-03-10 23:43:14.289445 | orchestrator | Monday 10 March 2025 23:43:13 +0000 (0:00:01.210) 0:00:04.375 **********
2025-03-10 23:43:14.289605 | orchestrator | ok: [testbed-manager]
2025-03-10 23:43:14.289674 | orchestrator |
2025-03-10 23:43:14.289696 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-03-10 23:43:14.290104 | orchestrator | Monday 10 March 2025 23:43:14 +0000 (0:00:00.422) 0:00:04.798 **********
2025-03-10 23:43:15.320889 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:15.321196 | orchestrator |
2025-03-10 23:43:15.321228 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-03-10 23:43:15.321250 | orchestrator | Monday 10 March 2025 23:43:15 +0000 (0:00:01.029) 0:00:05.827 **********
2025-03-10 23:43:43.248090 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-03-10 23:43:43.248550 | orchestrator | ok: [testbed-manager]
2025-03-10 23:43:43.248587 | orchestrator |
2025-03-10 23:43:43.248608 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-03-10 23:43:43.250332 | orchestrator | Monday 10 March 2025 23:43:43 +0000 (0:00:27.924) 0:00:33.752 **********
2025-03-10 23:43:55.651666 | orchestrator | changed: [testbed-manager]
2025-03-10 23:43:55.651947 | orchestrator |
2025-03-10 23:43:55.651980 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-03-10 23:43:55.652003 | orchestrator | Monday 10 March 2025 23:43:55 +0000 (0:00:12.401) 0:00:46.154 **********
2025-03-10 23:44:55.732558 | orchestrator | Pausing for 60 seconds
2025-03-10 23:44:55.802941 | orchestrator | changed: [testbed-manager]
2025-03-10 23:44:55.802976 | orchestrator |
2025-03-10 23:44:55.802987 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-03-10 23:44:55.802998 | orchestrator | Monday 10 March 2025 23:44:55 +0000 (0:01:00.083) 0:01:46.237 **********
2025-03-10 23:44:55.803039 | orchestrator | ok: [testbed-manager]
2025-03-10 23:44:55.803756 | orchestrator |
2025-03-10 23:44:55.804825 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-03-10 23:44:55.806165 | orchestrator | Monday 10 March 2025 23:44:55 +0000 (0:00:00.071) 0:01:46.309 **********
2025-03-10 23:44:56.501285 | orchestrator | changed: [testbed-manager]
2025-03-10 23:44:56.501984 | orchestrator |
2025-03-10 23:44:56.502069 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:44:56.502222 | orchestrator | 2025-03-10 23:44:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-10 23:44:56.502244 | orchestrator | 2025-03-10 23:44:56 | INFO  | Please wait and do not abort execution.
2025-03-10 23:44:56.502263 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-10 23:44:56.502907 | orchestrator |
2025-03-10 23:44:56.503191 | orchestrator |
2025-03-10 23:44:56.503540 | orchestrator | TASKS RECAP ********************************************************************
2025-03-10 23:44:56.504022 | orchestrator | Monday 10 March 2025 23:44:56 +0000 (0:00:00.701) 0:01:47.011 **********
2025-03-10 23:44:56.504451 | orchestrator | ===============================================================================
2025-03-10 23:44:56.505029 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-03-10 23:44:56.505854 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 27.92s
2025-03-10 23:44:56.506309 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.40s
2025-03-10 23:44:56.506751 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.61s
2025-03-10 23:44:56.507121 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.31s
2025-03-10 23:44:56.508093 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.21s
2025-03-10 23:44:56.509433 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.03s
2025-03-10 23:44:56.510636 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.70s
2025-03-10 23:44:56.511224 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.42s
2025-03-10 23:44:56.511934 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2025-03-10 23:44:57.046926 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-03-10 23:44:57.047024 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-03-10 23:44:57.047277 | orchestrator | ++ semver latest 9.0.0
2025-03-10 23:44:57.102257 | orchestrator | + [[ -1 -lt 0 ]]
2025-03-10 23:44:57.103498 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-03-10 23:44:57.103527 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-03-10 23:44:58.822599 | orchestrator | 2025-03-10 23:44:58 | INFO  | Task 7fb3d4aa-51f0-42dc-bdd4-45f6550ad82e (operator) was prepared for execution.
2025-03-10 23:45:02.499302 | orchestrator | 2025-03-10 23:44:58 | INFO  | It takes a moment until task 7fb3d4aa-51f0-42dc-bdd4-45f6550ad82e (operator) has been started and output is visible here.
2025-03-10 23:45:02.499461 | orchestrator |
2025-03-10 23:45:02.499927 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-03-10 23:45:02.500035 | orchestrator |
2025-03-10 23:45:02.500069 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-03-10 23:45:02.500335 | orchestrator | Monday 10 March 2025 23:45:02 +0000 (0:00:00.119) 0:00:00.119 **********
2025-03-10 23:45:06.343880 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:45:06.344828 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:45:06.345922 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:45:06.347150 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:45:06.348115 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:45:06.349048 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:45:06.351285 | orchestrator |
2025-03-10 23:45:06.351539 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-03-10 23:45:06.352306 | orchestrator | Monday 10 March 2025 23:45:06 +0000 (0:00:03.847) 0:00:03.966 **********
2025-03-10 23:45:07.243425 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:45:07.245844 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:45:07.245881 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:45:07.246352 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:45:07.246838 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:45:07.246871 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:45:07.246890 | orchestrator | 2025-03-10 23:45:07.247098 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-03-10 23:45:07.247335 | orchestrator | 2025-03-10 23:45:07.247896 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-03-10 23:45:07.248180 | orchestrator | Monday 10 March 2025 23:45:07 +0000 (0:00:00.899) 0:00:04.865 ********** 2025-03-10 23:45:07.326120 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:45:07.354308 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:45:07.384441 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:45:07.432631 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:45:07.434079 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:45:07.435173 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:45:07.436224 | orchestrator | 2025-03-10 23:45:07.437049 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-03-10 23:45:07.438418 | orchestrator | Monday 10 March 2025 23:45:07 +0000 (0:00:00.190) 0:00:05.056 ********** 2025-03-10 23:45:07.527475 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:45:07.560263 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:45:07.585260 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:45:07.636807 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:45:07.637048 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:45:07.639416 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:45:07.640599 | orchestrator | 2025-03-10 23:45:07.641005 | orchestrator | TASK [osism.commons.operator : Create operator group] 
************************** 2025-03-10 23:45:07.641858 | orchestrator | Monday 10 March 2025 23:45:07 +0000 (0:00:00.204) 0:00:05.261 ********** 2025-03-10 23:45:08.316899 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:45:08.317041 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:45:08.317067 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:45:08.317640 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:45:08.317872 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:45:08.318192 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:45:08.318414 | orchestrator | 2025-03-10 23:45:08.318779 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-03-10 23:45:08.319078 | orchestrator | Monday 10 March 2025 23:45:08 +0000 (0:00:00.675) 0:00:05.937 ********** 2025-03-10 23:45:09.183332 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:45:09.184312 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:45:09.185135 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:45:09.185892 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:45:09.187977 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:45:09.188564 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:45:09.188590 | orchestrator | 2025-03-10 23:45:09.188610 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-03-10 23:45:09.189106 | orchestrator | Monday 10 March 2025 23:45:09 +0000 (0:00:00.869) 0:00:06.807 ********** 2025-03-10 23:45:10.438985 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-03-10 23:45:10.439159 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-03-10 23:45:10.439181 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-03-10 23:45:10.439199 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-03-10 23:45:10.439530 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-03-10 
23:45:10.440348 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-03-10 23:45:10.442332 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-03-10 23:45:10.443931 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-03-10 23:45:10.443982 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-03-10 23:45:10.443996 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-03-10 23:45:10.444014 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-03-10 23:45:10.444132 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-03-10 23:45:10.444693 | orchestrator | 2025-03-10 23:45:10.445172 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-03-10 23:45:10.445854 | orchestrator | Monday 10 March 2025 23:45:10 +0000 (0:00:01.253) 0:00:08.060 ********** 2025-03-10 23:45:11.827163 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:45:11.827397 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:45:11.827431 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:45:11.827696 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:45:11.828237 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:45:11.828799 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:45:11.828830 | orchestrator | 2025-03-10 23:45:11.829103 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-03-10 23:45:11.829339 | orchestrator | Monday 10 March 2025 23:45:11 +0000 (0:00:01.387) 0:00:09.448 ********** 2025-03-10 23:45:13.005492 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-03-10 23:45:13.005660 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-03-10 23:45:13.005687 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-03-10 23:45:13.163503 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-03-10 23:45:13.163676 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-03-10 23:45:13.163960 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-03-10 23:45:13.164379 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-03-10 23:45:13.165945 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-03-10 23:45:13.166457 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-03-10 23:45:13.166580 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-03-10 23:45:13.166608 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-03-10 23:45:13.166669 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-03-10 23:45:13.166687 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-03-10 23:45:13.166705 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-03-10 23:45:13.167633 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-03-10 23:45:13.168728 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-03-10 23:45:13.169447 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-03-10 23:45:13.170145 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-03-10 23:45:13.170180 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-03-10 23:45:13.170980 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-03-10 23:45:13.171691 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-03-10 23:45:13.172180 | 
orchestrator | 2025-03-10 23:45:13.172631 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-03-10 23:45:13.173303 | orchestrator | Monday 10 March 2025 23:45:13 +0000 (0:00:01.338) 0:00:10.787 ********** 2025-03-10 23:45:13.906967 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:45:13.908666 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:45:13.908712 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:45:13.909458 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:45:13.909837 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:45:13.910186 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:45:13.910740 | orchestrator | 2025-03-10 23:45:13.911016 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-03-10 23:45:13.911316 | orchestrator | Monday 10 March 2025 23:45:13 +0000 (0:00:00.740) 0:00:11.528 ********** 2025-03-10 23:45:13.990784 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:45:14.031260 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:45:14.058877 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:45:14.132371 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:45:14.132574 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:45:14.132699 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:45:14.132721 | orchestrator | 2025-03-10 23:45:14.133126 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-03-10 23:45:14.133421 | orchestrator | Monday 10 March 2025 23:45:14 +0000 (0:00:00.228) 0:00:11.756 ********** 2025-03-10 23:45:15.019638 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-03-10 23:45:15.021113 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:45:15.021735 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-10 23:45:15.022445 | orchestrator | changed: [testbed-node-3] 2025-03-10 
23:45:15.023304 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-03-10 23:45:15.023740 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:45:15.024420 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-10 23:45:15.024917 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:45:15.025546 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-10 23:45:15.026171 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:45:15.026576 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-10 23:45:15.027130 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:45:15.027435 | orchestrator | 2025-03-10 23:45:15.028111 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-03-10 23:45:15.028690 | orchestrator | Monday 10 March 2025 23:45:15 +0000 (0:00:00.884) 0:00:12.641 ********** 2025-03-10 23:45:15.076456 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:45:15.102336 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:45:15.134364 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:45:15.170578 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:45:15.216541 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:45:15.217701 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:45:15.219028 | orchestrator | 2025-03-10 23:45:15.220070 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-03-10 23:45:15.221453 | orchestrator | Monday 10 March 2025 23:45:15 +0000 (0:00:00.199) 0:00:12.840 ********** 2025-03-10 23:45:15.275685 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:45:15.307370 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:45:15.344131 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:45:15.368726 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:45:15.412162 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:45:15.413156 | 
orchestrator | skipping: [testbed-node-5] 2025-03-10 23:45:15.414685 | orchestrator | 2025-03-10 23:45:15.416314 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-03-10 23:45:15.417604 | orchestrator | Monday 10 March 2025 23:45:15 +0000 (0:00:00.194) 0:00:13.034 ********** 2025-03-10 23:45:15.500633 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:45:15.532007 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:45:15.555535 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:45:15.596302 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:45:15.596933 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:45:15.597548 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:45:15.598123 | orchestrator | 2025-03-10 23:45:15.598608 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-03-10 23:45:15.599234 | orchestrator | Monday 10 March 2025 23:45:15 +0000 (0:00:00.185) 0:00:13.220 ********** 2025-03-10 23:45:16.405071 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:45:16.405667 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:45:16.405706 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:45:16.405866 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:45:16.406441 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:45:16.406936 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:45:16.408248 | orchestrator | 2025-03-10 23:45:16.409162 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-03-10 23:45:16.409716 | orchestrator | Monday 10 March 2025 23:45:16 +0000 (0:00:00.808) 0:00:14.029 ********** 2025-03-10 23:45:16.511209 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:45:16.533948 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:45:16.555169 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:45:16.686206 | 
orchestrator | skipping: [testbed-node-3] 2025-03-10 23:45:16.687022 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:45:16.688607 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:45:16.688973 | orchestrator | 2025-03-10 23:45:16.689679 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:45:16.690476 | orchestrator | 2025-03-10 23:45:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-10 23:45:16.690981 | orchestrator | 2025-03-10 23:45:16 | INFO  | Please wait and do not abort execution. 2025-03-10 23:45:16.691872 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-10 23:45:16.692610 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-10 23:45:16.693138 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-10 23:45:16.694125 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-10 23:45:16.694808 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-10 23:45:16.695667 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-10 23:45:16.696622 | orchestrator | 2025-03-10 23:45:16.697413 | orchestrator | 2025-03-10 23:45:16.697889 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-10 23:45:16.698702 | orchestrator | Monday 10 March 2025 23:45:16 +0000 (0:00:00.281) 0:00:14.311 ********** 2025-03-10 23:45:16.698944 | orchestrator | =============================================================================== 2025-03-10 23:45:16.699874 | orchestrator | Gathering Facts --------------------------------------------------------- 
3.85s 2025-03-10 23:45:16.700428 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.39s 2025-03-10 23:45:16.701671 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.34s 2025-03-10 23:45:16.702204 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s 2025-03-10 23:45:16.703140 | orchestrator | Do not require tty for all users ---------------------------------------- 0.90s 2025-03-10 23:45:16.704158 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.88s 2025-03-10 23:45:16.705194 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s 2025-03-10 23:45:16.705817 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.81s 2025-03-10 23:45:16.706295 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.74s 2025-03-10 23:45:16.706884 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.68s 2025-03-10 23:45:16.707502 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.28s 2025-03-10 23:45:16.708130 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.23s 2025-03-10 23:45:16.708603 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2025-03-10 23:45:16.709245 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.20s 2025-03-10 23:45:16.709857 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2025-03-10 23:45:16.710498 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s 2025-03-10 23:45:16.711381 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.19s 
2025-03-10 23:45:17.220291 | orchestrator | + osism apply --environment custom facts
2025-03-10 23:45:18.872662 | orchestrator | 2025-03-10 23:45:18 | INFO  | Trying to run play facts in environment custom
2025-03-10 23:45:18.928948 | orchestrator | 2025-03-10 23:45:18 | INFO  | Task 28a277f0-db62-4330-b2ac-005aa7eccb7c (facts) was prepared for execution.
2025-03-10 23:45:22.589968 | orchestrator | 2025-03-10 23:45:18 | INFO  | It takes a moment until task 28a277f0-db62-4330-b2ac-005aa7eccb7c (facts) has been started and output is visible here.
2025-03-10 23:45:22.590191 | orchestrator |
2025-03-10 23:45:22.590278 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-03-10 23:45:22.590321 | orchestrator |
2025-03-10 23:45:22.595096 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-03-10 23:45:22.595930 | orchestrator | Monday 10 March 2025 23:45:22 +0000 (0:00:00.099) 0:00:00.099 **********
2025-03-10 23:45:24.074502 | orchestrator | ok: [testbed-manager]
2025-03-10 23:45:24.075206 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:45:24.075682 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:45:24.075801 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:45:24.077076 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:45:24.077169 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:45:24.078441 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:45:24.079252 | orchestrator |
2025-03-10 23:45:24.079935 | orchestrator | TASK [Copy fact file] **********************************************************
2025-03-10 23:45:24.080367 | orchestrator | Monday 10 March 2025 23:45:24 +0000 (0:00:01.484) 0:00:01.583 **********
2025-03-10 23:45:25.395329 | orchestrator | ok: [testbed-manager]
2025-03-10 23:45:25.395491 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:45:25.395934 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:45:25.396203 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:45:25.396863 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:45:25.397613 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:45:25.397916 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:45:25.398343 | orchestrator |
2025-03-10 23:45:25.399196 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-03-10 23:45:25.400916 | orchestrator |
2025-03-10 23:45:25.401835 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-03-10 23:45:25.402603 | orchestrator | Monday 10 March 2025 23:45:25 +0000 (0:00:01.324) 0:00:02.907 **********
2025-03-10 23:45:25.523602 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:45:25.523984 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:45:25.524351 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:45:25.524994 | orchestrator |
2025-03-10 23:45:25.525541 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-03-10 23:45:25.525898 | orchestrator | Monday 10 March 2025 23:45:25 +0000 (0:00:00.127) 0:00:03.035 **********
2025-03-10 23:45:25.700134 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:45:25.704341 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:45:25.840130 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:45:25.840197 | orchestrator |
2025-03-10 23:45:25.840215 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-03-10 23:45:25.840230 | orchestrator | Monday 10 March 2025 23:45:25 +0000 (0:00:00.175) 0:00:03.211 **********
2025-03-10 23:45:25.840255 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:45:25.840771 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:45:25.841486 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:45:25.842006 | orchestrator |
2025-03-10 23:45:25.842784 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-03-10 23:45:25.843542 | orchestrator | Monday 10 March 2025 23:45:25 +0000 (0:00:00.142) 0:00:03.353 **********
2025-03-10 23:45:26.005451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:45:26.006219 | orchestrator |
2025-03-10 23:45:26.006824 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-03-10 23:45:26.007561 | orchestrator | Monday 10 March 2025 23:45:26 +0000 (0:00:00.165) 0:00:03.519 **********
2025-03-10 23:45:26.474622 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:45:26.474844 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:45:26.474912 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:45:26.475926 | orchestrator |
2025-03-10 23:45:26.476547 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-03-10 23:45:26.476579 | orchestrator | Monday 10 March 2025 23:45:26 +0000 (0:00:00.468) 0:00:03.988 **********
2025-03-10 23:45:26.613677 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:45:26.613859 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:45:26.614491 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:45:26.614795 | orchestrator |
2025-03-10 23:45:26.615313 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-03-10 23:45:26.615654 | orchestrator | Monday 10 March 2025 23:45:26 +0000 (0:00:00.136) 0:00:04.124 **********
2025-03-10 23:45:27.659148 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:45:27.660662 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:45:27.660786 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:45:27.660909 | orchestrator |
2025-03-10 23:45:27.661326 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-03-10 23:45:27.661830 | orchestrator | Monday 10 March 2025 23:45:27 +0000 (0:00:01.046) 0:00:05.171 **********
2025-03-10 23:45:28.147504 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:45:28.148058 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:45:28.148320 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:45:28.149092 | orchestrator |
2025-03-10 23:45:28.150012 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-03-10 23:45:28.150202 | orchestrator | Monday 10 March 2025 23:45:28 +0000 (0:00:00.485) 0:00:05.657 **********
2025-03-10 23:45:29.337705 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:45:29.337939 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:45:29.338416 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:45:29.339215 | orchestrator |
2025-03-10 23:45:29.339860 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-03-10 23:45:29.340489 | orchestrator | Monday 10 March 2025 23:45:29 +0000 (0:00:01.191) 0:00:06.848 **********
2025-03-10 23:45:43.259493 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:45:43.376172 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:45:43.376266 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:45:43.376283 | orchestrator |
2025-03-10 23:45:43.376300 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-03-10 23:45:43.376317 | orchestrator | Monday 10 March 2025 23:45:43 +0000 (0:00:13.915) 0:00:20.764 **********
2025-03-10 23:45:43.376348 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:45:43.376416 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:45:43.377844 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:45:43.378537 | orchestrator |
2025-03-10 23:45:43.378869 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-03-10 23:45:43.379829 | orchestrator | Monday 10 March 2025 23:45:43 +0000 (0:00:00.124) 0:00:20.889 **********
2025-03-10 23:45:51.395398 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:45:51.395692 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:45:51.395761 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:45:51.395799 | orchestrator |
2025-03-10 23:45:51.396162 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-03-10 23:45:51.398919 | orchestrator | Monday 10 March 2025 23:45:51 +0000 (0:00:08.016) 0:00:28.906 **********
2025-03-10 23:45:51.798343 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:45:51.798466 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:45:51.798816 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:45:51.799012 | orchestrator |
2025-03-10 23:45:51.801136 | orchestrator | TASK [Copy fact files] *********************************************************
2025-03-10 23:45:51.801860 | orchestrator | Monday 10 March 2025 23:45:51 +0000 (0:00:00.405) 0:00:29.312 **********
2025-03-10 23:45:55.052722 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-03-10 23:45:55.053219 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-03-10 23:45:55.053247 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-03-10 23:45:55.053902 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-03-10 23:45:55.054188 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-03-10 23:45:55.055772 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-03-10 23:45:55.055940 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-03-10 23:45:55.059083 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-03-10 23:45:55.059146 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-03-10 23:45:55.059971 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-03-10 23:45:55.060194 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-03-10 23:45:55.060674 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-03-10 23:45:55.061138 | orchestrator |
2025-03-10 23:45:55.061411 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-03-10 23:45:55.062174 | orchestrator | Monday 10 March 2025 23:45:55 +0000 (0:00:03.251) 0:00:32.563 **********
2025-03-10 23:45:56.192444 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:45:56.192815 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:45:56.192869 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:45:56.196946 | orchestrator |
2025-03-10 23:45:56.197477 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-03-10 23:45:56.198349 | orchestrator |
2025-03-10 23:45:56.200004 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-10 23:45:56.200168 | orchestrator | Monday 10 March 2025 23:45:56 +0000 (0:00:01.140) 0:00:33.704 **********
2025-03-10 23:46:00.328380 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:46:00.329051 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:46:00.329101 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:46:00.330266 | orchestrator | ok: [testbed-manager]
2025-03-10 23:46:00.331390 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:46:00.332207 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:46:00.332688 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:46:00.333526 | orchestrator |
2025-03-10 23:46:00.334113 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:46:00.335123 | orchestrator | 2025-03-10 23:46:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-10 23:46:00.335928 | orchestrator | 2025-03-10 23:46:00 | INFO  | Please wait and do not abort execution.
2025-03-10 23:46:00.335967 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-10 23:46:00.336213 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-10 23:46:00.336882 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-10 23:46:00.337383 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-10 23:46:00.337836 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:46:00.338695 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:46:00.339297 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:46:00.340342 | orchestrator |
2025-03-10 23:46:00.340578 | orchestrator |
2025-03-10 23:46:00.340995 | orchestrator | TASKS RECAP ********************************************************************
2025-03-10 23:46:00.341547 | orchestrator | Monday 10 March 2025 23:46:00 +0000 (0:00:04.136) 0:00:37.840 **********
2025-03-10 23:46:00.341941 | orchestrator | ===============================================================================
2025-03-10 23:46:00.342373 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.92s
2025-03-10 23:46:00.342854 | orchestrator | Install required packages (Debian) -------------------------------------- 8.02s
2025-03-10 23:46:00.343321 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.14s
2025-03-10 23:46:00.343888 | orchestrator | Copy fact files --------------------------------------------------------- 3.25s
2025-03-10 23:46:00.344512 | orchestrator | Create custom facts directory ------------------------------------------- 1.48s
2025-03-10 23:46:00.344780 | orchestrator | Copy fact file ---------------------------------------------------------- 1.32s
2025-03-10 23:46:00.345788 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.19s
2025-03-10 23:46:00.345868 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.14s
2025-03-10 23:46:00.345892 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2025-03-10 23:46:00.346214 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s
2025-03-10 23:46:00.346555 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2025-03-10 23:46:00.346754 | orchestrator | Create custom facts directory ------------------------------------------- 0.41s
2025-03-10 23:46:00.347319 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2025-03-10 23:46:00.347603 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s
2025-03-10 23:46:00.347914 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.14s
2025-03-10 23:46:00.348285 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2025-03-10 23:46:00.348563 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2025-03-10 23:46:00.348799 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2025-03-10 23:46:00.950664 | orchestrator | + osism apply bootstrap
2025-03-10 23:46:02.621729 | orchestrator | 2025-03-10 23:46:02 | INFO  | Task 24b0387a-2c8f-4a67-9981-9ac27a4bba16 (bootstrap) was prepared for execution.
2025-03-10 23:46:06.429429 | orchestrator | 2025-03-10 23:46:02 | INFO  | It takes a moment until task 24b0387a-2c8f-4a67-9981-9ac27a4bba16 (bootstrap) has been started and output is visible here.
2025-03-10 23:46:06.429559 | orchestrator |
2025-03-10 23:46:06.434834 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-03-10 23:46:06.434870 | orchestrator |
2025-03-10 23:46:06.532097 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-03-10 23:46:06.532176 | orchestrator | Monday 10 March 2025 23:46:06 +0000 (0:00:00.139) 0:00:00.139 **********
2025-03-10 23:46:06.532215 | orchestrator | ok: [testbed-manager]
2025-03-10 23:46:06.564873 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:46:06.596327 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:46:06.627662 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:46:06.739096 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:46:06.740308 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:46:06.740624 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:46:06.741176 | orchestrator |
2025-03-10 23:46:06.741648 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-03-10 23:46:06.742329 | orchestrator |
2025-03-10 23:46:06.742570 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-10 23:46:06.742945 | orchestrator | Monday 10 March 2025 23:46:06 +0000 (0:00:00.315) 0:00:00.455 **********
2025-03-10 23:46:10.639376 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:46:10.639604 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:46:10.640105 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:46:10.640418 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:46:10.641043 | orchestrator | ok: [testbed-manager]
2025-03-10 23:46:10.641118 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:46:10.641651 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:46:10.642169 | orchestrator |
2025-03-10 23:46:10.642505 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-03-10 23:46:10.643310 | orchestrator |
2025-03-10 23:46:10.643845 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-10 23:46:10.645907 | orchestrator | Monday 10 March 2025 23:46:10 +0000 (0:00:03.900) 0:00:04.355 **********
2025-03-10 23:46:10.734264 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-03-10 23:46:10.735835 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-03-10 23:46:10.779862 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-03-10 23:46:10.782895 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-03-10 23:46:10.783967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-03-10 23:46:10.784110 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-03-10 23:46:10.851410 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-03-10 23:46:10.852455 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-03-10 23:46:10.852591 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-03-10 23:46:10.852933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-03-10 23:46:10.853322 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-03-10 23:46:10.853970 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-03-10 23:46:10.854279 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-03-10 23:46:10.854546 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-03-10 23:46:10.854886 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-03-10 23:46:10.930163 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-03-10 23:46:10.932513 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-03-10 23:46:10.932656 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-03-10 23:46:10.933175 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-03-10 23:46:10.933521 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-03-10 23:46:10.933854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-03-10 23:46:10.934115 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-03-10 23:46:11.190129 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:46:11.190644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-03-10 23:46:11.191483 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:46:11.192357 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-03-10 23:46:11.193001 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-03-10 23:46:11.193621 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-03-10 23:46:11.194236 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:46:11.195073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-03-10 23:46:11.195802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-03-10 23:46:11.196145 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-03-10 23:46:11.196960 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-03-10 23:46:11.197301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-03-10 23:46:11.198157 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-03-10 23:46:11.198990 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-03-10 23:46:11.199615 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-03-10 23:46:11.200036 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-03-10 23:46:11.200681 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-03-10 23:46:11.201122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-03-10 23:46:11.201834 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-03-10 23:46:11.202510 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-03-10 23:46:11.202990 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-03-10 23:46:11.203496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-03-10 23:46:11.204053 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:46:11.204908 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-03-10 23:46:11.205374 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-03-10 23:46:11.206522 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-03-10 23:46:11.206758 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:46:11.207304 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-03-10 23:46:11.207516 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-03-10 23:46:11.208405 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:46:11.209162 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-03-10 23:46:11.209926 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-03-10 23:46:11.210139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-03-10 23:46:11.210885 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:46:11.211520 | orchestrator |
2025-03-10 23:46:11.211759 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-03-10 23:46:11.212429 | orchestrator |
2025-03-10 23:46:11.213659 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-03-10 23:46:11.214096 | orchestrator | Monday 10 March 2025 23:46:11 +0000 (0:00:00.550) 0:00:04.906 **********
2025-03-10 23:46:11.288105 | orchestrator | ok: [testbed-manager]
2025-03-10 23:46:11.319535 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:46:11.352027 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:46:11.389127 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:46:11.452192 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:46:11.453156 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:46:11.453497 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:46:11.453997 | orchestrator |
2025-03-10 23:46:11.454391 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-03-10 23:46:11.454710 | orchestrator | Monday 10 March 2025 23:46:11 +0000 (0:00:00.262) 0:00:05.168 **********
2025-03-10 23:46:12.773653 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:46:12.774719 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:46:12.774965 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:46:12.776017 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:46:12.779385 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:46:12.780259 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:46:12.781075 | orchestrator | ok: [testbed-manager]
2025-03-10 23:46:12.781759 | orchestrator |
2025-03-10 23:46:12.782282 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-03-10 23:46:12.783537 | orchestrator | Monday 10 March 2025 23:46:12 +0000 (0:00:01.320) 0:00:06.489 **********
2025-03-10 23:46:14.275651 | orchestrator | ok: [testbed-manager]
2025-03-10 23:46:14.275886 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:46:14.276487 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:46:14.276979 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:46:14.278809 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:46:14.279437 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:46:14.279729 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:46:14.280426 | orchestrator |
2025-03-10 23:46:14.280726 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-03-10 23:46:14.281552 | orchestrator | Monday 10 March 2025 23:46:14 +0000 (0:00:01.501) 0:00:07.991 **********
2025-03-10 23:46:14.579042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:46:14.579587 | orchestrator |
2025-03-10 23:46:14.581479 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-03-10 23:46:14.582096 | orchestrator | Monday 10 March 2025 23:46:14 +0000 (0:00:00.303) 0:00:08.294 **********
2025-03-10 23:46:17.183476 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:46:17.187611 | orchestrator | changed: [testbed-manager]
2025-03-10 23:46:17.187650 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:46:17.187673 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:46:17.187844 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:46:17.188166 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:46:17.188912 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:46:17.189190 | orchestrator |
2025-03-10 23:46:17.189912 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-03-10 23:46:17.190337 | orchestrator | Monday 10 March 2025 23:46:17 +0000 (0:00:02.602) 0:00:10.896 **********
2025-03-10 23:46:17.272761 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:46:17.500911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:46:17.501363 | orchestrator |
2025-03-10 23:46:17.502304 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-03-10 23:46:17.502544 | orchestrator | Monday 10 March 2025 23:46:17 +0000 (0:00:00.319) 0:00:11.216 **********
2025-03-10 23:46:18.609855 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:46:18.610468 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:46:18.611790 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:46:18.612502 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:46:18.613714 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:46:18.614086 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:46:18.614788 | orchestrator |
2025-03-10 23:46:18.615252 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-03-10 23:46:18.616217 | orchestrator | Monday 10 March 2025 23:46:18 +0000 (0:00:01.107) 0:00:12.323 **********
2025-03-10 23:46:18.679850 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:46:19.368951 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:46:19.369241 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:46:19.369276 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:46:19.369864 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:46:19.370436 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:46:19.371063 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:46:19.372044 | orchestrator |
2025-03-10 23:46:19.372306 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-03-10 23:46:19.372844 | orchestrator | Monday 10 March 2025 23:46:19 +0000 (0:00:00.758) 0:00:13.082 **********
2025-03-10 23:46:19.474115 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:46:19.507360 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:46:19.540872 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:46:19.862800 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:46:19.863739 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:46:19.866152 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:46:19.866914 | orchestrator | ok: [testbed-manager]
2025-03-10 23:46:19.868440 | orchestrator |
2025-03-10 23:46:19.869002 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-03-10 23:46:19.870281 | orchestrator | Monday 10 March 2025 23:46:19 +0000 (0:00:00.496) 0:00:13.578 **********
2025-03-10 23:46:19.951115 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:46:19.981910 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:46:20.007016 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:46:20.049896 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:46:20.122568 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:46:20.123076 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:46:20.123554 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:46:20.124553 | orchestrator |
2025-03-10 23:46:20.125384 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-03-10 23:46:20.127044 | orchestrator | Monday 10 March 2025 23:46:20 +0000 (0:00:00.260) 0:00:13.838 **********
2025-03-10 23:46:20.495082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:46:20.496018 | orchestrator |
2025-03-10 23:46:20.496524 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-03-10 23:46:20.497361 | orchestrator | Monday 10 March 2025 23:46:20 +0000 (0:00:00.370) 0:00:14.209 **********
2025-03-10 23:46:20.845312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:46:20.846873 | orchestrator |
2025-03-10 23:46:20.847967 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-03-10 23:46:20.848952 | orchestrator | Monday 10 March 2025 23:46:20 +0000 (0:00:00.350) 0:00:14.560 **********
2025-03-10 23:46:22.262936 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:46:22.263117 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:46:22.263157 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:46:22.263486 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:46:22.263517 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:46:22.264329 | orchestrator | ok: [testbed-manager]
2025-03-10 23:46:22.265317 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:46:22.266266 | orchestrator |
2025-03-10 23:46:22.266553 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-03-10 23:46:22.266959 | orchestrator | Monday 10 March 2025 23:46:22 +0000 (0:00:01.415) 0:00:15.976 **********
2025-03-10 23:46:22.351297 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:46:22.378878 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:46:22.410899 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:46:22.443650 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:46:22.520948 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:46:22.521231 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:46:22.521881 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:46:22.522908 | orchestrator |
2025-03-10 23:46:22.523372 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-03-10 23:46:22.524065 | orchestrator | Monday 10 March 2025 23:46:22 +0000 (0:00:00.258) 0:00:16.234 ********** 2025-03-10 23:46:23.131158 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:23.132916 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:23.132954 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:23.133768 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:23.137391 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:23.137934 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:23.137961 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:23.138007 | orchestrator | 2025-03-10 23:46:23.138074 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-03-10 23:46:23.138477 | orchestrator | Monday 10 March 2025 23:46:23 +0000 (0:00:00.611) 0:00:16.845 ********** 2025-03-10 23:46:23.241574 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:46:23.277200 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:46:23.310868 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:46:23.342231 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:46:23.448369 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:46:23.448952 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:46:23.448981 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:46:23.449002 | orchestrator | 2025-03-10 23:46:23.449230 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-03-10 23:46:23.449263 | orchestrator | Monday 10 March 2025 23:46:23 +0000 (0:00:00.317) 0:00:17.163 ********** 2025-03-10 23:46:24.052295 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:24.052501 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:46:24.053187 | orchestrator | changed: [testbed-node-0] 
2025-03-10 23:46:24.053523 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:46:24.053897 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:46:24.054210 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:46:24.054451 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:46:24.054905 | orchestrator | 2025-03-10 23:46:24.055332 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-03-10 23:46:24.055533 | orchestrator | Monday 10 March 2025 23:46:24 +0000 (0:00:00.604) 0:00:17.767 ********** 2025-03-10 23:46:25.234097 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:25.235167 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:46:25.236141 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:46:25.237122 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:46:25.237758 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:46:25.238305 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:46:25.239592 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:46:25.240330 | orchestrator | 2025-03-10 23:46:25.241265 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-03-10 23:46:25.242395 | orchestrator | Monday 10 March 2025 23:46:25 +0000 (0:00:01.180) 0:00:18.947 ********** 2025-03-10 23:46:26.714715 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:26.714861 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:26.714967 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:26.716346 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:26.716737 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:26.717229 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:26.717581 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:26.719339 | orchestrator | 2025-03-10 23:46:26.719864 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-03-10 23:46:26.720089 | 
orchestrator | Monday 10 March 2025 23:46:26 +0000 (0:00:01.480) 0:00:20.428 ********** 2025-03-10 23:46:27.087596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:46:27.091405 | orchestrator | 2025-03-10 23:46:27.091458 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-03-10 23:46:27.093880 | orchestrator | Monday 10 March 2025 23:46:27 +0000 (0:00:00.367) 0:00:20.796 ********** 2025-03-10 23:46:27.168669 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:46:28.510293 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:46:28.510494 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:46:28.511321 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:46:28.512080 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:46:28.512613 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:46:28.513197 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:46:28.513844 | orchestrator | 2025-03-10 23:46:28.514330 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-10 23:46:28.514962 | orchestrator | Monday 10 March 2025 23:46:28 +0000 (0:00:01.428) 0:00:22.224 ********** 2025-03-10 23:46:28.592391 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:28.625410 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:28.651940 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:28.685826 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:28.784534 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:28.785914 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:28.786922 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:28.787038 | orchestrator | 2025-03-10 23:46:28.787504 | orchestrator | TASK 
[osism.commons.repository : Set repository_default fact to default value] *** 2025-03-10 23:46:28.787808 | orchestrator | Monday 10 March 2025 23:46:28 +0000 (0:00:00.275) 0:00:22.499 ********** 2025-03-10 23:46:28.900118 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:28.947169 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:28.975698 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:29.008661 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:29.093419 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:29.094229 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:29.094889 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:29.095665 | orchestrator | 2025-03-10 23:46:29.099024 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-10 23:46:29.191939 | orchestrator | Monday 10 March 2025 23:46:29 +0000 (0:00:00.309) 0:00:22.809 ********** 2025-03-10 23:46:29.191984 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:29.223951 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:29.253463 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:29.284463 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:29.363447 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:29.365383 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:29.365992 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:29.367296 | orchestrator | 2025-03-10 23:46:29.368197 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-10 23:46:29.721983 | orchestrator | Monday 10 March 2025 23:46:29 +0000 (0:00:00.270) 0:00:23.079 ********** 2025-03-10 23:46:29.722159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:46:29.722605 | orchestrator | 
2025-03-10 23:46:29.725548 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-10 23:46:30.278458 | orchestrator | Monday 10 March 2025 23:46:29 +0000 (0:00:00.356) 0:00:23.436 ********** 2025-03-10 23:46:30.278627 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:30.278745 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:30.280177 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:30.281818 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:30.282195 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:30.283315 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:30.283990 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:30.284781 | orchestrator | 2025-03-10 23:46:30.285523 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-10 23:46:30.287108 | orchestrator | Monday 10 March 2025 23:46:30 +0000 (0:00:00.555) 0:00:23.991 ********** 2025-03-10 23:46:30.375796 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:46:30.417119 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:46:30.451121 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:46:30.489279 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:46:30.569002 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:46:30.569237 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:46:30.570747 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:46:30.571000 | orchestrator | 2025-03-10 23:46:30.571781 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-10 23:46:30.572333 | orchestrator | Monday 10 March 2025 23:46:30 +0000 (0:00:00.293) 0:00:24.285 ********** 2025-03-10 23:46:31.661524 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:31.661712 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:46:31.662325 | orchestrator | changed: [testbed-node-2] 2025-03-10 
23:46:31.662882 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:46:31.663304 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:31.664109 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:31.664332 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:31.665501 | orchestrator | 2025-03-10 23:46:31.665929 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-03-10 23:46:31.666631 | orchestrator | Monday 10 March 2025 23:46:31 +0000 (0:00:01.091) 0:00:25.376 ********** 2025-03-10 23:46:32.303892 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:32.304125 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:32.304747 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:32.305168 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:32.305765 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:32.306466 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:32.307000 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:32.307717 | orchestrator | 2025-03-10 23:46:32.308447 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-10 23:46:32.310101 | orchestrator | Monday 10 March 2025 23:46:32 +0000 (0:00:00.641) 0:00:26.018 ********** 2025-03-10 23:46:33.389755 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:33.389955 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:46:33.390265 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:33.390885 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:46:33.391443 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:46:33.391922 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:33.392284 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:33.392633 | orchestrator | 2025-03-10 23:46:33.393153 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-10 23:46:33.393419 | orchestrator | Monday 10 March 2025 23:46:33 
+0000 (0:00:01.085) 0:00:27.103 ********** 2025-03-10 23:46:47.225462 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:47.226431 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:47.226687 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:47.227469 | orchestrator | changed: [testbed-manager] 2025-03-10 23:46:47.228379 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:46:47.229308 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:46:47.230634 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:46:47.231113 | orchestrator | 2025-03-10 23:46:47.231315 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-03-10 23:46:47.232354 | orchestrator | Monday 10 March 2025 23:46:47 +0000 (0:00:13.831) 0:00:40.935 ********** 2025-03-10 23:46:47.321338 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:47.357129 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:47.387055 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:47.428917 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:47.523537 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:47.524010 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:47.525096 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:47.525470 | orchestrator | 2025-03-10 23:46:47.525504 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-03-10 23:46:47.525899 | orchestrator | Monday 10 March 2025 23:46:47 +0000 (0:00:00.302) 0:00:41.237 ********** 2025-03-10 23:46:47.607523 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:47.638350 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:47.666923 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:47.693719 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:47.762961 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:47.763916 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:47.763954 | orchestrator | ok: 
[testbed-node-5] 2025-03-10 23:46:47.765461 | orchestrator | 2025-03-10 23:46:47.766431 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-03-10 23:46:47.767517 | orchestrator | Monday 10 March 2025 23:46:47 +0000 (0:00:00.240) 0:00:41.478 ********** 2025-03-10 23:46:47.866204 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:47.894722 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:47.927170 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:47.954965 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:48.033538 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:48.034109 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:48.034773 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:48.036033 | orchestrator | 2025-03-10 23:46:48.040157 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-03-10 23:46:48.380552 | orchestrator | Monday 10 March 2025 23:46:48 +0000 (0:00:00.269) 0:00:41.748 ********** 2025-03-10 23:46:48.380723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:46:48.381738 | orchestrator | 2025-03-10 23:46:48.382848 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-03-10 23:46:48.383256 | orchestrator | Monday 10 March 2025 23:46:48 +0000 (0:00:00.346) 0:00:42.094 ********** 2025-03-10 23:46:50.148169 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:50.148395 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:50.148897 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:50.151036 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:50.151869 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:50.153395 | orchestrator | ok: 
[testbed-manager] 2025-03-10 23:46:50.155299 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:50.156099 | orchestrator | 2025-03-10 23:46:50.156856 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-03-10 23:46:50.157797 | orchestrator | Monday 10 March 2025 23:46:50 +0000 (0:00:01.767) 0:00:43.862 ********** 2025-03-10 23:46:51.243196 | orchestrator | changed: [testbed-manager] 2025-03-10 23:46:51.244405 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:46:51.246980 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:46:51.247885 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:46:51.247905 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:46:51.247913 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:46:51.247922 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:46:51.249058 | orchestrator | 2025-03-10 23:46:51.249087 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-03-10 23:46:51.249841 | orchestrator | Monday 10 March 2025 23:46:51 +0000 (0:00:01.095) 0:00:44.957 ********** 2025-03-10 23:46:52.086389 | orchestrator | ok: [testbed-manager] 2025-03-10 23:46:52.086552 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:46:52.087278 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:46:52.088266 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:46:52.089220 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:46:52.089772 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:46:52.089942 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:46:52.090985 | orchestrator | 2025-03-10 23:46:52.091257 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-03-10 23:46:52.091935 | orchestrator | Monday 10 March 2025 23:46:52 +0000 (0:00:00.841) 0:00:45.798 ********** 2025-03-10 23:46:52.464937 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:46:52.465972 | orchestrator | 2025-03-10 23:46:52.466890 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-03-10 23:46:52.468020 | orchestrator | Monday 10 March 2025 23:46:52 +0000 (0:00:00.379) 0:00:46.178 ********** 2025-03-10 23:46:53.576476 | orchestrator | changed: [testbed-manager] 2025-03-10 23:46:53.577092 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:46:53.578293 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:46:53.579270 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:46:53.580380 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:46:53.581939 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:46:53.583137 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:46:53.583575 | orchestrator | 2025-03-10 23:46:53.584358 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-03-10 23:46:53.585071 | orchestrator | Monday 10 March 2025 23:46:53 +0000 (0:00:01.112) 0:00:47.290 ********** 2025-03-10 23:46:53.686868 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:46:53.732248 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:46:53.755026 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:46:53.945262 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:46:53.946159 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:46:53.946200 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:46:53.946575 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:46:53.947518 | orchestrator | 2025-03-10 23:46:53.947771 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-03-10 23:46:53.948244 | orchestrator | Monday 10 
March 2025 23:46:53 +0000 (0:00:00.368) 0:00:47.659 ********** 2025-03-10 23:47:07.462627 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:47:07.463900 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:47:07.463943 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:47:07.465365 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:47:07.465786 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:47:07.466776 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:47:07.467346 | orchestrator | changed: [testbed-manager] 2025-03-10 23:47:07.467478 | orchestrator | 2025-03-10 23:47:07.468347 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-03-10 23:47:07.468956 | orchestrator | Monday 10 March 2025 23:47:07 +0000 (0:00:13.512) 0:01:01.172 ********** 2025-03-10 23:47:08.700080 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:47:08.701016 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:47:08.702771 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:47:08.703946 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:47:08.705222 | orchestrator | ok: [testbed-manager] 2025-03-10 23:47:08.705958 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:47:08.706838 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:47:08.707687 | orchestrator | 2025-03-10 23:47:08.708611 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-03-10 23:47:08.709347 | orchestrator | Monday 10 March 2025 23:47:08 +0000 (0:00:01.243) 0:01:02.415 ********** 2025-03-10 23:47:09.774586 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:47:09.775128 | orchestrator | ok: [testbed-manager] 2025-03-10 23:47:09.775177 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:47:09.776580 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:47:09.777163 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:47:09.778180 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:47:09.779060 
| orchestrator | ok: [testbed-node-4] 2025-03-10 23:47:09.780059 | orchestrator | 2025-03-10 23:47:09.780946 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-03-10 23:47:09.781502 | orchestrator | Monday 10 March 2025 23:47:09 +0000 (0:00:01.072) 0:01:03.487 ********** 2025-03-10 23:47:09.849891 | orchestrator | [WARNING]: Found variable using reserved name: q 2025-03-10 23:47:09.877033 | orchestrator | ok: [testbed-manager] 2025-03-10 23:47:09.914827 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:47:09.950904 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:47:09.986940 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:47:10.055163 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:47:10.055698 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:47:10.055734 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:47:10.056817 | orchestrator | 2025-03-10 23:47:10.057319 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-03-10 23:47:10.058071 | orchestrator | Monday 10 March 2025 23:47:10 +0000 (0:00:00.283) 0:01:03.771 ********** 2025-03-10 23:47:10.161912 | orchestrator | ok: [testbed-manager] 2025-03-10 23:47:10.189675 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:47:10.219571 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:47:10.256583 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:47:10.326563 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:47:10.326745 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:47:10.327887 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:47:10.328317 | orchestrator | 2025-03-10 23:47:10.329073 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-03-10 23:47:10.330090 | orchestrator | Monday 10 March 2025 23:47:10 +0000 (0:00:00.270) 0:01:04.042 ********** 2025-03-10 23:47:10.708287 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:47:10.708520 | orchestrator | 2025-03-10 23:47:10.708549 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-03-10 23:47:10.708570 | orchestrator | Monday 10 March 2025 23:47:10 +0000 (0:00:00.377) 0:01:04.419 ********** 2025-03-10 23:47:12.262388 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:47:12.262866 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:47:12.263726 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:47:12.264615 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:47:12.264990 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:47:12.265733 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:47:12.266340 | orchestrator | ok: [testbed-manager] 2025-03-10 23:47:12.267091 | orchestrator | 2025-03-10 23:47:12.267623 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-03-10 23:47:12.268080 | orchestrator | Monday 10 March 2025 23:47:12 +0000 (0:00:01.554) 0:01:05.974 ********** 2025-03-10 23:47:12.965043 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:47:12.965832 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:47:12.965875 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:47:12.966299 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:47:12.966742 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:47:12.968155 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:47:12.968580 | orchestrator | changed: [testbed-manager] 2025-03-10 23:47:12.968739 | orchestrator | 2025-03-10 23:47:12.969275 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-03-10 23:47:12.969927 | orchestrator | Monday 10 March 2025 23:47:12 
+0000 (0:00:00.704) 0:01:06.678 ********** 2025-03-10 23:47:13.062156 | orchestrator | ok: [testbed-manager] 2025-03-10 23:47:13.094986 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:47:13.136054 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:47:13.182306 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:47:13.267941 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:47:13.268247 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:47:13.268573 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:47:13.269089 | orchestrator | 2025-03-10 23:47:13.269228 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-03-10 23:47:13.269939 | orchestrator | Monday 10 March 2025 23:47:13 +0000 (0:00:00.304) 0:01:06.983 ********** 2025-03-10 23:47:14.484020 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:47:14.484165 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:47:14.484191 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:47:14.484592 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:47:14.485561 | orchestrator | ok: [testbed-manager] 2025-03-10 23:47:14.486144 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:47:14.486766 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:47:14.487695 | orchestrator | 2025-03-10 23:47:14.488235 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-03-10 23:47:14.488346 | orchestrator | Monday 10 March 2025 23:47:14 +0000 (0:00:01.214) 0:01:08.197 ********** 2025-03-10 23:47:17.427016 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:47:17.428312 | orchestrator | changed: [testbed-manager] 2025-03-10 23:47:17.428389 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:47:17.428413 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:47:17.428675 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:47:17.428887 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:47:17.429005 | orchestrator | ok: 
[testbed-node-5] 2025-03-10 23:47:17.429555 | orchestrator | 2025-03-10 23:47:17.429790 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-03-10 23:47:17.429888 | orchestrator | Monday 10 March 2025 23:47:17 +0000 (0:00:02.943) 0:01:11.141 ********** 2025-03-10 23:47:34.818986 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:48:09.362367 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:48:09.362502 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:48:09.362521 | orchestrator | ok: [testbed-manager] 2025-03-10 23:48:09.362536 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:48:09.362551 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:48:09.362565 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:48:09.362616 | orchestrator | 2025-03-10 23:48:09.362632 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-03-10 23:48:09.362647 | orchestrator | Monday 10 March 2025 23:47:34 +0000 (0:00:17.385) 0:01:28.526 ********** 2025-03-10 23:48:09.362679 | orchestrator | ok: [testbed-manager] 2025-03-10 23:48:09.363172 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:48:09.363202 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:48:09.363219 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:48:09.363243 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:49:27.365260 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:49:27.365402 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:49:27.365423 | orchestrator | 2025-03-10 23:49:27.365439 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-03-10 23:49:27.365455 | orchestrator | Monday 10 March 2025 23:48:09 +0000 (0:00:34.541) 0:02:03.068 ********** 2025-03-10 23:49:27.365523 | orchestrator | changed: [testbed-manager] 2025-03-10 23:49:27.365642 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:49:27.365666 | orchestrator | changed: 
[testbed-node-4] 2025-03-10 23:49:27.365680 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:49:27.365694 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:49:27.365708 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:49:27.365728 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:49:27.366319 | orchestrator | 2025-03-10 23:49:27.367076 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-03-10 23:49:27.367269 | orchestrator | Monday 10 March 2025 23:49:27 +0000 (0:01:18.003) 0:03:21.071 ********** 2025-03-10 23:49:29.194937 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:49:29.195804 | orchestrator | changed: [testbed-manager] 2025-03-10 23:49:29.195848 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:49:29.196354 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:49:29.196411 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:49:29.196432 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:49:29.197216 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:49:29.198096 | orchestrator | 2025-03-10 23:49:29.198187 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-03-10 23:49:29.198976 | orchestrator | Monday 10 March 2025 23:49:29 +0000 (0:00:01.837) 0:03:22.909 ********** 2025-03-10 23:49:42.534854 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:49:42.536226 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:49:42.536272 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:49:42.538135 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:49:42.538164 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:49:42.538178 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:49:42.538198 | orchestrator | changed: [testbed-manager] 2025-03-10 23:49:42.538887 | orchestrator | 2025-03-10 23:49:42.539380 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 
2025-03-10 23:49:42.539905 | orchestrator | Monday 10 March 2025 23:49:42 +0000 (0:00:13.336) 0:03:36.246 ********** 2025-03-10 23:49:42.984763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-03-10 23:49:42.985288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-03-10 23:49:42.985675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-03-10 23:49:42.986009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 
1048576}]}) 2025-03-10 23:49:42.986248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-03-10 23:49:42.986751 | orchestrator | 2025-03-10 23:49:42.988805 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-03-10 23:49:42.994779 | orchestrator | Monday 10 March 2025 23:49:42 +0000 (0:00:00.454) 0:03:36.700 ********** 2025-03-10 23:49:43.052404 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-10 23:49:43.092822 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:49:43.199756 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-10 23:49:43.718637 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-10 23:49:43.718751 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:49:43.720686 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:49:43.720715 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-10 23:49:43.720731 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:49:43.720746 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-10 23:49:43.720778 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-10 23:49:43.720794 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-10 23:49:43.720815 | orchestrator | 2025-03-10 23:49:43.721049 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] 
**************** 2025-03-10 23:49:43.721618 | orchestrator | Monday 10 March 2025 23:49:43 +0000 (0:00:00.730) 0:03:37.430 ********** 2025-03-10 23:49:43.792655 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-10 23:49:43.793457 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-10 23:49:43.793628 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-10 23:49:43.796015 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-10 23:49:43.798861 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-10 23:49:43.837538 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-10 23:49:43.837578 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-10 23:49:43.837594 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-10 23:49:43.837616 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-10 23:49:43.872985 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-10 23:49:43.873056 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:49:43.936867 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-10 23:49:43.937748 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-10 23:49:43.938402 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-10 23:49:43.938431 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-10 23:49:43.938755 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-10 23:49:43.939039 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-10 23:49:47.597303 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-10 23:49:47.597658 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-10 23:49:47.597696 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-10 23:49:47.597720 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-10 23:49:47.598147 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-10 23:49:47.598419 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-10 23:49:47.600590 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-10 23:49:47.601717 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-10 23:49:47.602753 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-10 23:49:47.603279 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-10 23:49:47.604949 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:49:47.605445 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-10 23:49:47.606332 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  
2025-03-10 23:49:47.607060 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-10 23:49:47.607665 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-10 23:49:47.608607 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:49:47.609670 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-10 23:49:47.610221 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-10 23:49:47.610970 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-10 23:49:47.611773 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-10 23:49:47.611858 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-10 23:49:47.612920 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-10 23:49:47.613102 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-10 23:49:47.613825 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-10 23:49:47.614641 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-10 23:49:47.614860 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-10 23:49:47.615458 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:49:47.616070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-10 23:49:47.616638 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 
2025-03-10 23:49:47.617249 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-10 23:49:47.617990 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-10 23:49:47.618673 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-10 23:49:47.619454 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-10 23:49:47.620118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-10 23:49:47.620426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-10 23:49:47.621306 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-10 23:49:47.621894 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-10 23:49:47.622932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-10 23:49:47.623267 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-10 23:49:47.623856 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-10 23:49:47.624543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-10 23:49:47.625387 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-10 23:49:47.625904 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-10 23:49:47.626674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-10 23:49:47.626995 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-10 23:49:47.627627 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-10 23:49:47.628092 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-10 23:49:47.628623 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-10 23:49:47.629515 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-10 23:49:47.629629 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-10 23:49:47.629970 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-10 23:49:47.630326 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-10 23:49:47.630975 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-10 23:49:47.631340 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-10 23:49:47.631680 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-10 23:49:47.632047 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-10 23:49:47.632606 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-10 23:49:47.632928 | orchestrator | 2025-03-10 23:49:47.633233 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-03-10 23:49:47.635998 | orchestrator | Monday 10 March 2025 23:49:47 +0000 (0:00:03.880) 0:03:41.310 ********** 2025-03-10 23:49:49.207964 | orchestrator | changed: 
[testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-10 23:49:49.208138 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-10 23:49:49.208405 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-10 23:49:49.208438 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-10 23:49:49.209161 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-10 23:49:49.211409 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-10 23:49:49.212175 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-10 23:49:49.213050 | orchestrator | 2025-03-10 23:49:49.214202 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-03-10 23:49:49.215317 | orchestrator | Monday 10 March 2025 23:49:49 +0000 (0:00:01.610) 0:03:42.921 ********** 2025-03-10 23:49:49.275923 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-10 23:49:49.313018 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-10 23:49:49.313150 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:49:49.360401 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-10 23:49:49.360527 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:49:49.361104 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-10 23:49:49.400315 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:49:49.434390 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:49:49.832125 | orchestrator | changed: [testbed-node-3] => 
(item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-10 23:49:49.832992 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-10 23:49:49.833446 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-10 23:49:49.833709 | orchestrator | 2025-03-10 23:49:49.835140 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-03-10 23:49:49.835976 | orchestrator | Monday 10 March 2025 23:49:49 +0000 (0:00:00.625) 0:03:43.546 ********** 2025-03-10 23:49:49.905175 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-10 23:49:49.905679 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-10 23:49:49.948012 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:49:49.980693 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-10 23:49:49.980931 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:49:49.981802 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-10 23:49:50.019825 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:49:50.053546 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:49:50.607777 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-10 23:49:50.608683 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-10 23:49:50.609291 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-10 23:49:50.610066 | orchestrator | 2025-03-10 23:49:50.610561 | orchestrator | TASK 
[osism.commons.limits : Include limits tasks] ***************************** 2025-03-10 23:49:50.611265 | orchestrator | Monday 10 March 2025 23:49:50 +0000 (0:00:00.776) 0:03:44.323 ********** 2025-03-10 23:49:50.676238 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:49:50.714349 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:49:50.743995 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:49:50.807457 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:49:51.005984 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:49:51.006599 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:49:51.007378 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:49:51.008045 | orchestrator | 2025-03-10 23:49:51.008700 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-03-10 23:49:51.009159 | orchestrator | Monday 10 March 2025 23:49:51 +0000 (0:00:00.395) 0:03:44.719 ********** 2025-03-10 23:49:57.143627 | orchestrator | ok: [testbed-manager] 2025-03-10 23:49:57.144130 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:49:57.145168 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:49:57.146242 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:49:57.148779 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:49:57.148872 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:49:57.148896 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:49:57.149282 | orchestrator | 2025-03-10 23:49:57.149716 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-03-10 23:49:57.150085 | orchestrator | Monday 10 March 2025 23:49:57 +0000 (0:00:06.138) 0:03:50.858 ********** 2025-03-10 23:49:57.233278 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-03-10 23:49:57.280252 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-03-10 23:49:57.280318 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:49:57.281074 | 
orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-03-10 23:49:57.330764 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:49:57.334756 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-03-10 23:49:57.382858 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:49:57.383408 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-03-10 23:49:57.422131 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:49:57.496942 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:49:57.498734 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-03-10 23:49:57.499367 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:49:57.502660 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-03-10 23:49:58.676121 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:49:58.676275 | orchestrator | 2025-03-10 23:49:58.676296 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-03-10 23:49:58.676313 | orchestrator | Monday 10 March 2025 23:49:57 +0000 (0:00:00.353) 0:03:51.212 ********** 2025-03-10 23:49:58.676344 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-03-10 23:49:58.676416 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-03-10 23:49:58.677273 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-03-10 23:49:58.680525 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-03-10 23:49:58.681163 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-03-10 23:49:58.682053 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-03-10 23:49:58.682881 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-03-10 23:49:58.684071 | orchestrator | 2025-03-10 23:49:58.684753 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-03-10 23:49:58.685447 | orchestrator | Monday 10 March 2025 23:49:58 +0000 (0:00:01.178) 0:03:52.390 ********** 2025-03-10 23:49:59.297152 | 
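The services role above first populates service facts, then checks for unwanted services (the `nscd` item is skipped everywhere, i.e. `nscd` is not present) and ensures required ones such as `cron` are running. A hedged sketch of the check step, operating on a dict shaped like the documented output of Ansible's `service_facts` module (`ansible_facts.services`, keyed by `<name>.service` with a `state` field); the function name is an assumption:

```python
def check_forbidden_services(service_facts: dict, forbidden: list) -> list:
    """Return the forbidden services that are present and running.

    `service_facts` mimics ansible_facts.services from the service_facts
    module; an empty result corresponds to the "skipping" lines in the log.
    """
    offending = []
    for name in forbidden:
        svc = service_facts.get(f"{name}.service") or service_facts.get(name)
        if svc and svc.get("state") == "running":
            offending.append(name)
    return offending
```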
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:49:59.297996 | orchestrator | 2025-03-10 23:49:59.298402 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-03-10 23:49:59.301900 | orchestrator | Monday 10 March 2025 23:49:59 +0000 (0:00:00.621) 0:03:53.012 ********** 2025-03-10 23:50:00.549812 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:50:00.550178 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:50:00.550353 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:50:00.550759 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:00.551130 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:50:00.551615 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:50:00.553064 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:50:00.553157 | orchestrator | 2025-03-10 23:50:00.553585 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-03-10 23:50:00.554640 | orchestrator | Monday 10 March 2025 23:50:00 +0000 (0:00:01.252) 0:03:54.264 ********** 2025-03-10 23:50:01.202520 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:01.203010 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:50:01.203068 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:50:01.203105 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:50:01.203764 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:50:01.204355 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:50:01.205025 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:50:01.205789 | orchestrator | 2025-03-10 23:50:01.206300 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-03-10 23:50:01.207320 | orchestrator | Monday 10 March 2025 23:50:01 +0000 (0:00:00.651) 
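The motd role just checked for `/etc/default/motd-news` and then disabled the dynamic motd-news service on every host (`changed` across the board). On Ubuntu/Debian that file controls the feature via an `ENABLED=` line; a sketch of the edit the task presumably makes (the exact mechanism, e.g. `lineinfile`, is an assumption):

```python
def disable_motd_news(content: str) -> str:
    """Rewrite /etc/default/motd-news-style content so that ENABLED=0.

    Replaces an existing ENABLED= line in place, or appends one if the
    file had none. Comments and other settings are left untouched.
    """
    lines = content.splitlines()
    found = False
    for i, line in enumerate(lines):
        if line.strip().startswith("ENABLED="):
            lines[i] = "ENABLED=0"
            found = True
    if not found:
        lines.append("ENABLED=0")
    return "\n".join(lines) + "\n"
```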
0:03:54.916 ********** 2025-03-10 23:50:01.883608 | orchestrator | changed: [testbed-manager] 2025-03-10 23:50:01.884148 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:50:01.884192 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:50:01.884256 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:50:01.885319 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:50:01.885933 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:50:01.886496 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:50:01.887210 | orchestrator | 2025-03-10 23:50:01.887591 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-03-10 23:50:01.887952 | orchestrator | Monday 10 March 2025 23:50:01 +0000 (0:00:00.679) 0:03:55.596 ********** 2025-03-10 23:50:02.536178 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:50:02.536601 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:50:02.537069 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:50:02.537801 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:50:02.538085 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:02.538557 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:50:02.538889 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:50:02.540124 | orchestrator | 2025-03-10 23:50:02.540598 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-03-10 23:50:02.541091 | orchestrator | Monday 10 March 2025 23:50:02 +0000 (0:00:00.655) 0:03:56.251 ********** 2025-03-10 23:50:03.509981 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741648853.0830288, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.511922 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741648851.0088255, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.512564 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741648857.9367807, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.513504 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741648839.3296177, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.514268 | orchestrator | changed: [testbed-node-3] => 
(item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741648865.2297206, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.516246 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741648847.1299474, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.517730 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1741648854.4767928, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.518964 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1741648789.688499, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.519984 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741648782.8906965, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.520959 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741648785.5500588, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.521791 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741648867.341206, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.523298 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741648776.4963868, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.523684 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741648787.8577182, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.524894 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1741648794.0130267, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-10 23:50:03.525666 | orchestrator | 2025-03-10 23:50:03.526434 | orchestrator | TASK [osism.commons.motd : Copy motd file] 
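The "Remove pam_motd.so rule" task above iterates over the files found in `/etc/pam.d` (the stat dicts show `/etc/pam.d/sshd` and `/etc/pam.d/login` being edited on every host) and drops the session rules that invoke `pam_motd.so`, so PAM no longer prints the stock dynamic motd. A sketch of that edit; the real task's mechanism is not shown in the log, so this is only an approximation:

```python
def strip_pam_motd(pam_content: str) -> str:
    """Drop active lines invoking pam_motd.so from PAM config content.

    Commented-out lines and all other rules are preserved unchanged.
    """
    kept = [
        line for line in pam_content.splitlines()
        if "pam_motd.so" not in line or line.lstrip().startswith("#")
    ]
    return "\n".join(kept) + "\n"
```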
************************************* 2025-03-10 23:50:03.527184 | orchestrator | Monday 10 March 2025 23:50:03 +0000 (0:00:00.973) 0:03:57.224 ********** 2025-03-10 23:50:04.696136 | orchestrator | changed: [testbed-manager] 2025-03-10 23:50:04.697399 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:50:04.698810 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:50:04.700987 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:50:04.702129 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:50:04.703361 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:50:04.704044 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:50:04.704892 | orchestrator | 2025-03-10 23:50:04.705687 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-03-10 23:50:04.706702 | orchestrator | Monday 10 March 2025 23:50:04 +0000 (0:00:01.183) 0:03:58.408 ********** 2025-03-10 23:50:05.908543 | orchestrator | changed: [testbed-manager] 2025-03-10 23:50:05.908920 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:50:05.909646 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:50:05.910552 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:50:05.910931 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:50:05.911587 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:50:05.912491 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:50:05.912854 | orchestrator | 2025-03-10 23:50:05.913499 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-03-10 23:50:05.914690 | orchestrator | Monday 10 March 2025 23:50:05 +0000 (0:00:01.215) 0:03:59.624 ********** 2025-03-10 23:50:07.257619 | orchestrator | changed: [testbed-manager] 2025-03-10 23:50:07.259131 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:50:07.260651 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:50:07.260684 | orchestrator | changed: [testbed-node-2] 
2025-03-10 23:50:07.262622 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:50:07.263871 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:50:07.265229 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:50:07.265947 | orchestrator | 2025-03-10 23:50:07.266995 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-03-10 23:50:07.267846 | orchestrator | Monday 10 March 2025 23:50:07 +0000 (0:00:01.346) 0:04:00.970 ********** 2025-03-10 23:50:07.337302 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:50:07.382991 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:50:07.426003 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:50:07.460973 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:50:07.501668 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:50:07.568958 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:50:07.570059 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:50:07.570097 | orchestrator | 2025-03-10 23:50:07.571568 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-03-10 23:50:07.574813 | orchestrator | Monday 10 March 2025 23:50:07 +0000 (0:00:00.312) 0:04:01.283 ********** 2025-03-10 23:50:08.437497 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:08.437905 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:50:08.440299 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:50:08.441174 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:50:08.441206 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:50:08.442000 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:50:08.442760 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:50:08.443687 | orchestrator | 2025-03-10 23:50:08.443871 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-03-10 23:50:08.444421 | orchestrator | Monday 10 March 2025 23:50:08 +0000 (0:00:00.867) 
0:04:02.151 ********** 2025-03-10 23:50:08.923722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:50:08.924653 | orchestrator | 2025-03-10 23:50:08.924706 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-03-10 23:50:08.926373 | orchestrator | Monday 10 March 2025 23:50:08 +0000 (0:00:00.481) 0:04:02.633 ********** 2025-03-10 23:50:17.232869 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:17.233625 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:50:17.233670 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:50:17.234945 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:50:17.236036 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:50:17.238591 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:50:17.239146 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:50:17.239845 | orchestrator | 2025-03-10 23:50:17.240413 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-03-10 23:50:17.241035 | orchestrator | Monday 10 March 2025 23:50:17 +0000 (0:00:08.314) 0:04:10.947 ********** 2025-03-10 23:50:18.526919 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:50:18.527754 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:50:18.528047 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:18.529298 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:50:18.529661 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:50:18.530279 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:50:18.531102 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:50:18.531709 | orchestrator | 2025-03-10 23:50:18.532175 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-03-10 
23:50:18.533038 | orchestrator | Monday 10 March 2025 23:50:18 +0000 (0:00:01.293) 0:04:12.240 ********** 2025-03-10 23:50:19.609106 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:50:19.609247 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:50:19.609763 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:19.610848 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:50:19.611523 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:50:19.612247 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:50:19.612789 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:50:19.613596 | orchestrator | 2025-03-10 23:50:19.613925 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-03-10 23:50:19.614664 | orchestrator | Monday 10 March 2025 23:50:19 +0000 (0:00:01.082) 0:04:13.323 ********** 2025-03-10 23:50:20.219277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:50:20.219496 | orchestrator | 2025-03-10 23:50:20.220206 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-03-10 23:50:20.220963 | orchestrator | Monday 10 March 2025 23:50:20 +0000 (0:00:00.611) 0:04:13.935 ********** 2025-03-10 23:50:29.210776 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:50:29.210991 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:50:29.211023 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:50:29.211748 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:50:29.212223 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:50:29.213330 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:50:29.213590 | orchestrator | changed: [testbed-manager] 2025-03-10 23:50:29.214742 | orchestrator | 2025-03-10 23:50:29.215490 | orchestrator | TASK 
[osism.services.smartd : Create /var/log/smartd directory] **************** 2025-03-10 23:50:29.215884 | orchestrator | Monday 10 March 2025 23:50:29 +0000 (0:00:08.989) 0:04:22.924 ********** 2025-03-10 23:50:29.860827 | orchestrator | changed: [testbed-manager] 2025-03-10 23:50:29.861652 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:50:29.861689 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:50:29.862555 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:50:29.863502 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:50:29.864169 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:50:29.864670 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:50:29.865651 | orchestrator | 2025-03-10 23:50:29.866830 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-03-10 23:50:29.867012 | orchestrator | Monday 10 March 2025 23:50:29 +0000 (0:00:00.646) 0:04:23.571 ********** 2025-03-10 23:50:31.115641 | orchestrator | changed: [testbed-manager] 2025-03-10 23:50:31.117113 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:50:31.117151 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:50:31.117176 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:50:31.117948 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:50:31.118331 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:50:31.119667 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:50:31.120021 | orchestrator | 2025-03-10 23:50:31.120642 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-03-10 23:50:31.121680 | orchestrator | Monday 10 March 2025 23:50:31 +0000 (0:00:01.255) 0:04:24.827 ********** 2025-03-10 23:50:32.231009 | orchestrator | changed: [testbed-manager] 2025-03-10 23:50:32.231803 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:50:32.231846 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:50:32.233297 | orchestrator 
| changed: [testbed-node-2] 2025-03-10 23:50:32.236212 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:50:32.236624 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:50:32.236660 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:50:32.236684 | orchestrator | 2025-03-10 23:50:32.237245 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-03-10 23:50:32.238276 | orchestrator | Monday 10 March 2025 23:50:32 +0000 (0:00:01.116) 0:04:25.943 ********** 2025-03-10 23:50:32.380066 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:32.418903 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:50:32.461309 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:50:32.503703 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:50:32.578354 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:50:32.578541 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:50:32.578616 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:50:32.580135 | orchestrator | 2025-03-10 23:50:32.580726 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-03-10 23:50:32.581278 | orchestrator | Monday 10 March 2025 23:50:32 +0000 (0:00:00.347) 0:04:26.291 ********** 2025-03-10 23:50:32.701192 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:32.736612 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:50:32.783601 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:50:32.817743 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:50:32.925631 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:50:32.926565 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:50:32.927633 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:50:32.931087 | orchestrator | 2025-03-10 23:50:32.932120 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-03-10 23:50:32.932560 | orchestrator | Monday 10 March 2025 23:50:32 +0000 
(0:00:00.350) 0:04:26.641 ********** 2025-03-10 23:50:33.067487 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:33.110260 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:50:33.146350 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:50:33.184179 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:50:33.267086 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:50:33.267717 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:50:33.268477 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:50:33.268899 | orchestrator | 2025-03-10 23:50:33.269530 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-03-10 23:50:33.269940 | orchestrator | Monday 10 March 2025 23:50:33 +0000 (0:00:00.342) 0:04:26.983 ********** 2025-03-10 23:50:39.168128 | orchestrator | ok: [testbed-manager] 2025-03-10 23:50:39.168301 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:50:39.168549 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:50:39.169349 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:50:39.170682 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:50:39.172175 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:50:39.173251 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:50:39.173804 | orchestrator | 2025-03-10 23:50:39.175169 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-03-10 23:50:39.175721 | orchestrator | Monday 10 March 2025 23:50:39 +0000 (0:00:05.898) 0:04:32.882 ********** 2025-03-10 23:50:39.685348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:50:39.685640 | orchestrator | 2025-03-10 23:50:39.686220 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-03-10 23:50:39.686593 | 
orchestrator | Monday 10 March 2025 23:50:39 +0000 (0:00:00.516) 0:04:33.399 ********** 2025-03-10 23:50:39.734890 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-03-10 23:50:39.781996 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-03-10 23:50:39.830130 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-03-10 23:50:39.830186 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-03-10 23:50:39.830210 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:50:39.881078 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-03-10 23:50:39.881125 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-03-10 23:50:39.881697 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:50:39.881821 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-03-10 23:50:39.882240 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-03-10 23:50:39.927197 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:50:39.927648 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-03-10 23:50:39.928636 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-03-10 23:50:39.970587 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:50:39.970962 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-03-10 23:50:40.066879 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:50:40.067599 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-03-10 23:50:40.068941 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:50:40.069716 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-03-10 23:50:40.070730 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-03-10 23:50:40.071354 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:50:40.072492 | orchestrator | 2025-03-10 
23:50:40.073170 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-03-10 23:50:40.074065 | orchestrator | Monday 10 March 2025 23:50:40 +0000 (0:00:00.384) 0:04:33.783 ********** 2025-03-10 23:50:40.622616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:50:40.622774 | orchestrator | 2025-03-10 23:50:40.622796 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-03-10 23:50:40.622835 | orchestrator | Monday 10 March 2025 23:50:40 +0000 (0:00:00.551) 0:04:34.335 ********** 2025-03-10 23:50:40.705849 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-03-10 23:50:40.708591 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-03-10 23:50:40.759878 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:50:40.760327 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-03-10 23:50:40.802781 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:50:40.883897 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:50:40.886163 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-03-10 23:50:40.895348 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-03-10 23:50:40.940217 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:50:41.016279 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:50:41.016497 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-03-10 23:50:41.017674 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:50:41.018557 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-03-10 23:50:41.020989 | orchestrator | 
skipping: [testbed-node-5] 2025-03-10 23:50:41.021832 | orchestrator | 2025-03-10 23:50:41.021888 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-03-10 23:50:41.022157 | orchestrator | Monday 10 March 2025 23:50:41 +0000 (0:00:00.397) 0:04:34.732 ********** 2025-03-10 23:50:41.658484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:50:41.663369 | orchestrator | 2025-03-10 23:50:41.664635 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-03-10 23:50:41.665388 | orchestrator | Monday 10 March 2025 23:50:41 +0000 (0:00:00.639) 0:04:35.372 ********** 2025-03-10 23:51:16.357733 | orchestrator | changed: [testbed-manager] 2025-03-10 23:51:16.357995 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:51:16.358070 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:51:16.358087 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:51:16.358099 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:51:16.358119 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:51:24.692954 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:51:24.693086 | orchestrator | 2025-03-10 23:51:24.693106 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-03-10 23:51:24.693121 | orchestrator | Monday 10 March 2025 23:51:16 +0000 (0:00:34.691) 0:05:10.064 ********** 2025-03-10 23:51:24.693152 | orchestrator | changed: [testbed-manager] 2025-03-10 23:51:24.693220 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:51:24.693239 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:51:24.693260 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:51:24.695215 | orchestrator | changed: 
[testbed-node-1] 2025-03-10 23:51:24.695497 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:51:24.696048 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:51:24.696460 | orchestrator | 2025-03-10 23:51:24.697101 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-03-10 23:51:24.697168 | orchestrator | Monday 10 March 2025 23:51:24 +0000 (0:00:08.340) 0:05:18.405 ********** 2025-03-10 23:51:32.713891 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:51:32.714148 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:51:32.714186 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:51:32.714283 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:51:32.714982 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:51:32.715540 | orchestrator | changed: [testbed-manager] 2025-03-10 23:51:32.716018 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:51:32.716388 | orchestrator | 2025-03-10 23:51:32.716889 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-03-10 23:51:32.716958 | orchestrator | Monday 10 March 2025 23:51:32 +0000 (0:00:08.023) 0:05:26.428 ********** 2025-03-10 23:51:34.464789 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:51:34.465456 | orchestrator | ok: [testbed-manager] 2025-03-10 23:51:34.465502 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:51:34.466403 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:51:34.466669 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:51:34.467214 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:51:34.467563 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:51:34.468943 | orchestrator | 2025-03-10 23:51:34.470057 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-03-10 23:51:34.470792 | orchestrator | Monday 10 March 2025 23:51:34 +0000 (0:00:01.749) 0:05:28.178 ********** 2025-03-10 23:51:40.748395 | 
orchestrator | changed: [testbed-node-0] 2025-03-10 23:51:40.749594 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:51:40.749641 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:51:40.751032 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:51:40.751070 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:51:40.752953 | orchestrator | changed: [testbed-manager] 2025-03-10 23:51:40.753565 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:51:40.754958 | orchestrator | 2025-03-10 23:51:40.756118 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-03-10 23:51:40.756670 | orchestrator | Monday 10 March 2025 23:51:40 +0000 (0:00:06.283) 0:05:34.461 ********** 2025-03-10 23:51:41.251319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:51:41.251518 | orchestrator | 2025-03-10 23:51:41.251580 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-03-10 23:51:41.251639 | orchestrator | Monday 10 March 2025 23:51:41 +0000 (0:00:00.505) 0:05:34.966 ********** 2025-03-10 23:51:42.067916 | orchestrator | changed: [testbed-manager] 2025-03-10 23:51:42.070984 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:51:42.071039 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:51:42.071453 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:51:42.071477 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:51:42.071491 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:51:42.071504 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:51:42.071517 | orchestrator | 2025-03-10 23:51:42.071536 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-03-10 23:51:42.072226 | 
orchestrator | Monday 10 March 2025 23:51:42 +0000 (0:00:00.814) 0:05:35.781 ********** 2025-03-10 23:51:43.766585 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:51:43.766958 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:51:43.767267 | orchestrator | ok: [testbed-manager] 2025-03-10 23:51:43.768138 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:51:43.769136 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:51:43.769281 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:51:43.770204 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:51:43.770913 | orchestrator | 2025-03-10 23:51:43.771082 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-03-10 23:51:43.771646 | orchestrator | Monday 10 March 2025 23:51:43 +0000 (0:00:01.700) 0:05:37.482 ********** 2025-03-10 23:51:44.620340 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:51:44.620551 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:51:44.621477 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:51:44.621764 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:51:44.623110 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:51:44.624028 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:51:44.624737 | orchestrator | changed: [testbed-manager] 2025-03-10 23:51:44.625145 | orchestrator | 2025-03-10 23:51:44.625954 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-03-10 23:51:44.626846 | orchestrator | Monday 10 March 2025 23:51:44 +0000 (0:00:00.850) 0:05:38.332 ********** 2025-03-10 23:51:44.692955 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:51:44.729830 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:51:44.769115 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:51:44.807911 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:51:44.848294 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:51:44.921041 | 
orchestrator | skipping: [testbed-node-4] 2025-03-10 23:51:44.921993 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:51:44.926136 | orchestrator | 2025-03-10 23:51:45.045396 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-03-10 23:51:45.045449 | orchestrator | Monday 10 March 2025 23:51:44 +0000 (0:00:00.303) 0:05:38.635 ********** 2025-03-10 23:51:45.045473 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:51:45.091661 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:51:45.133922 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:51:45.173744 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:51:45.214949 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:51:45.447877 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:51:45.448632 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:51:45.450093 | orchestrator | 2025-03-10 23:51:45.451175 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-03-10 23:51:45.451814 | orchestrator | Monday 10 March 2025 23:51:45 +0000 (0:00:00.527) 0:05:39.163 ********** 2025-03-10 23:51:45.586217 | orchestrator | ok: [testbed-manager] 2025-03-10 23:51:45.635025 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:51:45.677027 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:51:45.720622 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:51:45.802629 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:51:45.803644 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:51:45.804824 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:51:45.805291 | orchestrator | 2025-03-10 23:51:45.806621 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-03-10 23:51:45.806760 | orchestrator | Monday 10 March 2025 23:51:45 +0000 (0:00:00.351) 0:05:39.515 ********** 2025-03-10 23:51:45.872844 | orchestrator | skipping: [testbed-manager] 
2025-03-10 23:51:45.925705 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:51:45.985583 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:51:46.040127 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:51:46.092051 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:51:46.183727 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:51:46.183854 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:51:46.185205 | orchestrator | 2025-03-10 23:51:46.186404 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-03-10 23:51:46.186848 | orchestrator | Monday 10 March 2025 23:51:46 +0000 (0:00:00.382) 0:05:39.898 ********** 2025-03-10 23:51:46.314159 | orchestrator | ok: [testbed-manager] 2025-03-10 23:51:46.356793 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:51:46.400134 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:51:46.442319 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:51:46.523956 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:51:46.525379 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:51:46.530636 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:51:46.532744 | orchestrator | 2025-03-10 23:51:46.533696 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-03-10 23:51:46.534616 | orchestrator | Monday 10 March 2025 23:51:46 +0000 (0:00:00.341) 0:05:40.239 ********** 2025-03-10 23:51:46.809459 | orchestrator | ok: [testbed-manager] =>  2025-03-10 23:51:46.809811 | orchestrator |  docker_version: 5:27.4.1 2025-03-10 23:51:46.848591 | orchestrator | ok: [testbed-node-0] =>  2025-03-10 23:51:46.849339 | orchestrator |  docker_version: 5:27.4.1 2025-03-10 23:51:46.885658 | orchestrator | ok: [testbed-node-1] =>  2025-03-10 23:51:46.886169 | orchestrator |  docker_version: 5:27.4.1 2025-03-10 23:51:46.924882 | orchestrator | ok: [testbed-node-2] =>  2025-03-10 23:51:46.925395 | orchestrator |  
docker_version: 5:27.4.1 2025-03-10 23:51:47.011047 | orchestrator | ok: [testbed-node-3] =>  2025-03-10 23:51:47.015573 | orchestrator |  docker_version: 5:27.4.1 2025-03-10 23:51:47.016879 | orchestrator | ok: [testbed-node-4] =>  2025-03-10 23:51:47.016907 | orchestrator |  docker_version: 5:27.4.1 2025-03-10 23:51:47.016922 | orchestrator | ok: [testbed-node-5] =>  2025-03-10 23:51:47.016935 | orchestrator |  docker_version: 5:27.4.1 2025-03-10 23:51:47.016949 | orchestrator | 2025-03-10 23:51:47.016969 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-03-10 23:51:47.022377 | orchestrator | Monday 10 March 2025 23:51:47 +0000 (0:00:00.483) 0:05:40.723 ********** 2025-03-10 23:51:47.154767 | orchestrator | ok: [testbed-manager] =>  2025-03-10 23:51:47.155815 | orchestrator |  docker_cli_version: 5:27.4.1 2025-03-10 23:51:47.209330 | orchestrator | ok: [testbed-node-0] =>  2025-03-10 23:51:47.210114 | orchestrator |  docker_cli_version: 5:27.4.1 2025-03-10 23:51:47.259340 | orchestrator | ok: [testbed-node-1] =>  2025-03-10 23:51:47.260508 | orchestrator |  docker_cli_version: 5:27.4.1 2025-03-10 23:51:47.295793 | orchestrator | ok: [testbed-node-2] =>  2025-03-10 23:51:47.296883 | orchestrator |  docker_cli_version: 5:27.4.1 2025-03-10 23:51:47.404217 | orchestrator | ok: [testbed-node-3] =>  2025-03-10 23:51:47.404816 | orchestrator |  docker_cli_version: 5:27.4.1 2025-03-10 23:51:47.405616 | orchestrator | ok: [testbed-node-4] =>  2025-03-10 23:51:47.406280 | orchestrator |  docker_cli_version: 5:27.4.1 2025-03-10 23:51:47.406880 | orchestrator | ok: [testbed-node-5] =>  2025-03-10 23:51:47.407238 | orchestrator |  docker_cli_version: 5:27.4.1 2025-03-10 23:51:47.407799 | orchestrator | 2025-03-10 23:51:47.408433 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-03-10 23:51:47.408847 | orchestrator | Monday 10 March 2025 23:51:47 +0000 (0:00:00.396) 
0:05:41.119 ********** 2025-03-10 23:51:47.512524 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:51:47.554185 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:51:47.590001 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:51:47.631935 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:51:47.675246 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:51:47.743959 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:51:47.744957 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:51:47.746334 | orchestrator | 2025-03-10 23:51:47.747090 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-03-10 23:51:47.752701 | orchestrator | Monday 10 March 2025 23:51:47 +0000 (0:00:00.341) 0:05:41.460 ********** 2025-03-10 23:51:47.819909 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:51:47.858300 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:51:47.896903 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:51:47.933309 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:51:48.003756 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:51:48.078534 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:51:48.079320 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:51:48.079996 | orchestrator | 2025-03-10 23:51:48.080685 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-03-10 23:51:48.081102 | orchestrator | Monday 10 March 2025 23:51:48 +0000 (0:00:00.332) 0:05:41.793 ********** 2025-03-10 23:51:48.617624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:51:48.619797 | orchestrator | 2025-03-10 23:51:48.619968 | orchestrator | TASK [osism.services.docker : 
Remove old architecture-dependent repository] ****
2025-03-10 23:51:48.621371 | orchestrator | Monday 10 March 2025 23:51:48 +0000 (0:00:00.539) 0:05:42.333 **********
2025-03-10 23:51:49.527792 | orchestrator | ok: [testbed-manager]
2025-03-10 23:51:49.528415 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:51:49.528434 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:51:49.528442 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:51:49.528450 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:51:49.528458 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:51:49.528465 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:51:49.528473 | orchestrator |
2025-03-10 23:51:49.528482 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-03-10 23:51:49.528495 | orchestrator | Monday 10 March 2025 23:51:49 +0000 (0:00:00.904) 0:05:43.237 **********
2025-03-10 23:51:52.516398 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:51:52.516613 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:51:52.517802 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:51:52.519095 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:51:52.520099 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:51:52.521465 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:51:52.522658 | orchestrator | ok: [testbed-manager]
2025-03-10 23:51:52.522814 | orchestrator |
2025-03-10 23:51:52.525451 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-03-10 23:51:52.526456 | orchestrator | Monday 10 March 2025 23:51:52 +0000 (0:00:02.990) 0:05:46.228 **********
2025-03-10 23:51:52.597871 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-03-10 23:51:52.867844 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-03-10 23:51:52.867962 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-03-10 23:51:52.869141 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-03-10 23:51:52.869205 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-03-10 23:51:52.870442 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-03-10 23:51:52.958621 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:51:52.959788 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-03-10 23:51:52.960435 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-03-10 23:51:52.960851 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-03-10 23:51:53.045885 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:51:53.046480 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-03-10 23:51:53.046963 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-03-10 23:51:53.146258 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-03-10 23:51:53.146776 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:51:53.146826 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-03-10 23:51:53.146930 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-03-10 23:51:53.147794 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-03-10 23:51:53.231176 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:51:53.233701 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-03-10 23:51:53.235912 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-03-10 23:51:53.236201 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-03-10 23:51:53.382121 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:51:53.382843 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:51:53.382880 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-03-10 23:51:53.383785 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-03-10 23:51:53.384439 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-03-10 23:51:53.384847 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:51:53.386064 | orchestrator |
2025-03-10 23:51:53.386985 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-03-10 23:51:53.387609 | orchestrator | Monday 10 March 2025 23:51:53 +0000 (0:00:00.868) 0:05:47.096 **********
2025-03-10 23:52:00.114457 | orchestrator | ok: [testbed-manager]
2025-03-10 23:52:00.114655 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:00.114683 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:52:00.114705 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:00.115394 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:00.117720 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:01.258300 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:01.258442 | orchestrator |
2025-03-10 23:52:01.258462 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-03-10 23:52:01.258477 | orchestrator | Monday 10 March 2025 23:52:00 +0000 (0:00:06.732) 0:05:53.828 **********
2025-03-10 23:52:01.258507 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:01.258576 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:52:01.259120 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:01.259883 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:01.260281 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:01.262146 | orchestrator | ok: [testbed-manager]
2025-03-10 23:52:01.264723 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:01.265589 | orchestrator |
2025-03-10 23:52:01.265623 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-03-10 23:52:01.265648 | orchestrator | Monday 10 March 2025 23:52:01 +0000 (0:00:01.142) 0:05:54.971 **********
2025-03-10 23:52:09.524579 | orchestrator | ok: [testbed-manager]
2025-03-10 23:52:09.524802 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:52:09.525566 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:09.525601 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:09.526147 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:09.527532 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:09.528050 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:09.528329 | orchestrator |
2025-03-10 23:52:09.530101 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-03-10 23:52:09.531018 | orchestrator | Monday 10 March 2025 23:52:09 +0000 (0:00:08.267) 0:06:03.238 **********
2025-03-10 23:52:13.235650 | orchestrator | changed: [testbed-manager]
2025-03-10 23:52:13.235814 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:52:13.238540 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:13.239234 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:13.239510 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:13.239538 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:13.240183 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:13.240957 | orchestrator |
2025-03-10 23:52:13.241676 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-03-10 23:52:13.244500 | orchestrator | Monday 10 March 2025 23:52:13 +0000 (0:00:03.709) 0:06:06.948 **********
2025-03-10 23:52:14.626215 | orchestrator | ok: [testbed-manager]
2025-03-10 23:52:14.626436 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:14.626811 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:52:14.627681 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:14.628566 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:14.629216 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:14.629693 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:14.630596 | orchestrator |
2025-03-10 23:52:14.630669 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-03-10 23:52:14.631276 | orchestrator | Monday 10 March 2025 23:52:14 +0000 (0:00:01.392) 0:06:08.340 **********
2025-03-10 23:52:16.041385 | orchestrator | ok: [testbed-manager]
2025-03-10 23:52:16.041605 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:52:16.042117 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:16.042157 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:16.042463 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:16.042749 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:16.042964 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:16.043510 | orchestrator |
2025-03-10 23:52:16.043758 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-03-10 23:52:16.044107 | orchestrator | Monday 10 March 2025 23:52:16 +0000 (0:00:01.414) 0:06:09.754 **********
2025-03-10 23:52:16.289122 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:52:16.371017 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:52:16.443402 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:52:16.524231 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:52:16.736700 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:52:16.739481 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:52:16.742482 | orchestrator | changed: [testbed-manager]
2025-03-10 23:52:16.744224 | orchestrator |
2025-03-10 23:52:16.744260 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-03-10 23:52:16.744645 | orchestrator | Monday 10 March 2025 23:52:16 +0000 (0:00:00.698) 0:06:10.452 **********
2025-03-10 23:52:26.928615 | orchestrator | ok: [testbed-manager]
2025-03-10 23:52:26.928795 | orchestrator | changed: [testbed-node-0]
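The pin/unlock/lock tasks above suggest the role pins Docker packages via apt preferences and holds containerd against unattended upgrades. A minimal Ansible sketch of that pattern — the task names are taken from the log, but the module choices, package names, and the `docker_version` variable are assumptions, not read from the actual osism role:

```yaml
# Hypothetical sketch only; not the osism.services.docker implementation.
- name: Pin docker package version
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker  # apt pinning file (assumed path)
    content: |
      Package: docker-ce docker-ce-cli
      Pin: version {{ docker_version }}*
      Pin-Priority: 1001

- name: Lock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io   # package name assumed
    selection: hold       # blocks upgrades until unheld ("Unlock" = install)
```

With a hold in place, the "Install containerd package" task would fail on hosts where a different version is requested, which is presumably why the log shows an "Unlock containerd package" task running before the install and a "Lock containerd package" task after it.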
2025-03-10 23:52:26.930999 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:26.932884 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:26.934262 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:26.934802 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:26.934832 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:26.935736 | orchestrator |
2025-03-10 23:52:26.936454 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-03-10 23:52:26.937546 | orchestrator | Monday 10 March 2025 23:52:26 +0000 (0:00:10.186) 0:06:20.639 **********
2025-03-10 23:52:27.764168 | orchestrator | changed: [testbed-manager]
2025-03-10 23:52:28.265096 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:52:28.265933 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:28.266554 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:28.267163 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:28.267592 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:28.268705 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:28.269531 | orchestrator |
2025-03-10 23:52:28.270456 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-03-10 23:52:28.270752 | orchestrator | Monday 10 March 2025 23:52:28 +0000 (0:00:01.338) 0:06:21.977 **********
2025-03-10 23:52:37.697635 | orchestrator | ok: [testbed-manager]
2025-03-10 23:52:37.698162 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:37.700375 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:52:37.701597 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:37.703501 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:37.704835 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:37.705824 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:37.706880 | orchestrator |
2025-03-10 23:52:37.707471 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-03-10 23:52:37.708297 | orchestrator | Monday 10 March 2025 23:52:37 +0000 (0:00:09.432) 0:06:31.410 **********
2025-03-10 23:52:48.526812 | orchestrator | ok: [testbed-manager]
2025-03-10 23:52:48.527047 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:52:48.527077 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:48.527093 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:48.527114 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:48.527568 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:48.527838 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:48.528411 | orchestrator |
2025-03-10 23:52:48.529024 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-03-10 23:52:48.529642 | orchestrator | Monday 10 March 2025 23:52:48 +0000 (0:00:10.828) 0:06:42.238 **********
2025-03-10 23:52:49.009647 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-03-10 23:52:49.781379 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-03-10 23:52:49.781520 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-03-10 23:52:49.781752 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-03-10 23:52:49.782142 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-03-10 23:52:49.782474 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-03-10 23:52:49.783370 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-03-10 23:52:49.783568 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-03-10 23:52:49.783755 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-03-10 23:52:49.784417 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-03-10 23:52:49.784514 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-03-10 23:52:49.784929 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-03-10 23:52:49.785254 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-03-10 23:52:49.785564 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-03-10 23:52:49.785663 | orchestrator |
2025-03-10 23:52:49.785915 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-03-10 23:52:49.786414 | orchestrator | Monday 10 March 2025 23:52:49 +0000 (0:00:01.254) 0:06:43.493 **********
2025-03-10 23:52:49.977012 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:52:50.058080 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:52:50.134971 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:52:50.225629 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:52:50.300628 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:52:50.424082 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:52:50.424522 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:52:50.425436 | orchestrator |
2025-03-10 23:52:50.426426 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-03-10 23:52:50.426924 | orchestrator | Monday 10 March 2025 23:52:50 +0000 (0:00:00.644) 0:06:44.138 **********
2025-03-10 23:52:54.709682 | orchestrator | ok: [testbed-manager]
2025-03-10 23:52:54.710267 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:52:54.710362 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:52:54.710381 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:52:54.710406 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:52:54.710478 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:52:54.713833 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:52:54.713895 | orchestrator |
2025-03-10 23:52:54.714618 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-03-10 23:52:54.714856 | orchestrator | Monday 10 March 2025 23:52:54 +0000 (0:00:04.283) 0:06:48.422 **********
2025-03-10 23:52:54.844130 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:52:54.920710 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:52:55.000739 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:52:55.086438 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:52:55.167644 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:52:55.270570 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:52:55.275901 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:52:55.361396 | orchestrator |
2025-03-10 23:52:55.361463 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-03-10 23:52:55.361481 | orchestrator | Monday 10 March 2025 23:52:55 +0000 (0:00:00.561) 0:06:48.983 **********
2025-03-10 23:52:55.361508 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-03-10 23:52:55.456581 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-03-10 23:52:55.456685 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:52:55.457036 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-03-10 23:52:55.459259 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-03-10 23:52:55.535868 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:52:55.536397 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-03-10 23:52:55.537016 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-03-10 23:52:55.611906 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:52:55.612020 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-03-10 23:52:55.613959 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-03-10 23:52:55.698857 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:52:55.698992 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-03-10 23:52:55.699720 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-03-10 23:52:55.783331 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:52:55.783445 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-03-10 23:52:55.783575 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-03-10 23:52:55.901589 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:52:55.902782 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-03-10 23:52:55.903868 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-03-10 23:52:55.907619 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:52:56.067677 | orchestrator |
2025-03-10 23:52:56.067774 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-03-10 23:52:56.067791 | orchestrator | Monday 10 March 2025 23:52:55 +0000 (0:00:00.632) 0:06:49.616 **********
2025-03-10 23:52:56.067820 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:52:56.137486 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:52:56.216481 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:52:56.285774 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:52:56.369559 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:52:56.492044 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:52:56.493483 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:52:56.493624 | orchestrator |
2025-03-10 23:52:56.493651 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-03-10 23:52:56.495581 | orchestrator | Monday 10 March 2025 23:52:56 +0000 (0:00:00.591) 0:06:50.207 **********
2025-03-10 23:52:56.654929 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:52:56.738653 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:52:56.812220 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:52:56.878191 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:52:56.951047 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:52:57.092270 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:52:57.092440 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:52:57.092463 | orchestrator |
2025-03-10 23:52:57.092484 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-03-10 23:52:57.092955 | orchestrator | Monday 10 March 2025 23:52:57 +0000 (0:00:00.598) 0:06:50.806 **********
2025-03-10 23:52:57.462668 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:52:57.532976 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:52:57.609856 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:52:57.713248 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:52:57.789637 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:52:57.911017 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:52:57.911339 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:52:57.911381 | orchestrator |
2025-03-10 23:52:57.912055 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-03-10 23:52:57.913104 | orchestrator | Monday 10 March 2025 23:52:57 +0000 (0:00:00.818) 0:06:51.624 **********
2025-03-10 23:52:59.727825 | orchestrator | ok: [testbed-manager]
2025-03-10 23:52:59.728223 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:52:59.731132 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:52:59.731899 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:52:59.731949 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:52:59.732854 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:52:59.733726 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:52:59.733993 | orchestrator |
2025-03-10 23:52:59.735191 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-03-10 23:52:59.735840 | orchestrator | Monday 10 March 2025 23:52:59 +0000 (0:00:01.817) 0:06:53.442 **********
2025-03-10 23:53:00.758081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:53:00.758515 | orchestrator |
2025-03-10 23:53:00.759000 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-03-10 23:53:00.759885 | orchestrator | Monday 10 March 2025 23:53:00 +0000 (0:00:01.030) 0:06:54.472 **********
2025-03-10 23:53:01.319587 | orchestrator | ok: [testbed-manager]
2025-03-10 23:53:01.396117 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:53:02.045808 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:53:02.046108 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:53:02.046143 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:53:02.046165 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:53:02.046337 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:53:02.047154 | orchestrator |
2025-03-10 23:53:02.047856 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-03-10 23:53:02.048577 | orchestrator | Monday 10 March 2025 23:53:02 +0000 (0:00:01.285) 0:06:55.757 **********
2025-03-10 23:53:02.545732 | orchestrator | ok: [testbed-manager]
2025-03-10 23:53:03.108515 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:53:03.108996 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:53:03.110156 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:53:03.111327 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:53:03.113141 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:53:03.113838 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:53:03.116715 | orchestrator |
2025-03-10 23:53:03.117402 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-03-10 23:53:03.118087 | orchestrator | Monday 10 March 2025 23:53:03 +0000 (0:00:01.062) 0:06:56.820 **********
2025-03-10 23:53:04.529513 | orchestrator | ok: [testbed-manager]
2025-03-10 23:53:04.529898 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:53:04.529938 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:53:04.531818 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:53:04.533178 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:53:04.534678 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:53:04.535349 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:53:04.536245 | orchestrator |
2025-03-10 23:53:04.537911 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-03-10 23:53:04.538424 | orchestrator | Monday 10 March 2025 23:53:04 +0000 (0:00:01.419) 0:06:58.239 **********
2025-03-10 23:53:04.684236 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:53:06.023755 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:53:06.024800 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:53:06.025367 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:53:06.026537 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:53:06.028941 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:53:06.030000 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:53:06.030520 | orchestrator |
2025-03-10 23:53:06.031395 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-03-10 23:53:06.031748 | orchestrator | Monday 10 March 2025 23:53:06 +0000 (0:00:01.499) 0:06:59.739 **********
2025-03-10 23:53:07.397999 | orchestrator | ok: [testbed-manager]
2025-03-10 23:53:07.398487 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:53:07.398521 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:53:07.398536 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:53:07.398549 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:53:07.398563 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:53:07.398585 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:53:07.399218 | orchestrator |
2025-03-10 23:53:07.399601 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-03-10 23:53:07.399912 | orchestrator | Monday 10 March 2025 23:53:07 +0000 (0:00:01.369) 0:07:01.109 **********
2025-03-10 23:53:09.108542 | orchestrator | changed: [testbed-manager]
2025-03-10 23:53:09.108740 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:53:09.108770 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:53:09.109003 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:53:09.109419 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:53:09.109453 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:53:09.109787 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:53:09.110169 | orchestrator |
2025-03-10 23:53:09.111501 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-03-10 23:53:10.130368 | orchestrator | Monday 10 March 2025 23:53:09 +0000 (0:00:01.713) 0:07:02.822 **********
2025-03-10 23:53:10.130505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:53:10.131608 | orchestrator |
2025-03-10 23:53:10.131980 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-03-10 23:53:10.134668 | orchestrator | Monday 10 March 2025 23:53:10 +0000 (0:00:01.022) 0:07:03.845 **********
2025-03-10 23:53:11.659889 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:53:11.660933 | orchestrator | ok: [testbed-manager]
2025-03-10 23:53:11.661533 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:53:11.662233 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:53:11.662690 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:53:11.664469 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:53:11.666127 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:53:11.666899 | orchestrator |
2025-03-10 23:53:11.668487 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-03-10 23:53:11.669576 | orchestrator | Monday 10 March 2025 23:53:11 +0000 (0:00:01.529) 0:07:05.374 **********
2025-03-10 23:53:12.844151 | orchestrator | ok: [testbed-manager]
2025-03-10 23:53:12.844936 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:53:12.846756 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:53:12.847581 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:53:12.848578 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:53:12.849344 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:53:12.850753 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:53:12.850874 | orchestrator |
2025-03-10 23:53:12.852683 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-03-10 23:53:14.311419 | orchestrator | Monday 10 March 2025 23:53:12 +0000 (0:00:01.181) 0:07:06.555 **********
2025-03-10 23:53:14.311549 | orchestrator | ok: [testbed-manager]
2025-03-10 23:53:14.311621 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:53:14.311649 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:53:14.311674 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:53:14.311706 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:53:14.312454 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:53:14.312567 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:53:14.312602 | orchestrator |
2025-03-10 23:53:14.312828 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-03-10 23:53:14.313169 | orchestrator | Monday 10 March 2025 23:53:14 +0000 (0:00:01.468) 0:07:08.023 **********
2025-03-10 23:53:15.541978 | orchestrator | ok: [testbed-manager]
2025-03-10 23:53:15.542204 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:53:15.542255 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:53:15.542464 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:53:15.542495 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:53:15.542708 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:53:15.542804 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:53:15.543009 | orchestrator |
2025-03-10 23:53:15.543391 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-03-10 23:53:15.543521 | orchestrator | Monday 10 March 2025 23:53:15 +0000 (0:00:01.231) 0:07:09.255 **********
2025-03-10 23:53:17.108572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:53:17.109098 | orchestrator |
2025-03-10 23:53:17.109132 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-10 23:53:17.109154 | orchestrator | Monday 10 March 2025 23:53:16 +0000 (0:00:01.012) 0:07:10.268 **********
2025-03-10 23:53:17.110357 | orchestrator |
2025-03-10 23:53:17.110820 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-10 23:53:17.111478 | orchestrator | Monday 10 March 2025 23:53:16 +0000 (0:00:00.045) 0:07:10.313 **********
2025-03-10 23:53:17.111699 | orchestrator |
2025-03-10 23:53:17.111783 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-10 23:53:17.112616 | orchestrator | Monday 10 March 2025 23:53:16 +0000 (0:00:00.041) 0:07:10.355 **********
2025-03-10 23:53:17.112838 | orchestrator |
2025-03-10 23:53:17.114005 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-10 23:53:17.114304 | orchestrator | Monday 10 March 2025 23:53:16 +0000 (0:00:00.053) 0:07:10.408 **********
2025-03-10 23:53:17.114895 | orchestrator |
2025-03-10 23:53:17.115012 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-10 23:53:17.116043 | orchestrator | Monday 10 March 2025 23:53:16 +0000 (0:00:00.045) 0:07:10.453 **********
2025-03-10 23:53:17.116428 | orchestrator |
2025-03-10 23:53:17.116465 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-10 23:53:17.116941 | orchestrator | Monday 10 March 2025 23:53:16 +0000 (0:00:00.038) 0:07:10.492 **********
2025-03-10 23:53:17.117215 | orchestrator |
2025-03-10 23:53:17.117709 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-03-10 23:53:17.118245 | orchestrator | Monday 10 March 2025 23:53:17 +0000 (0:00:00.283) 0:07:10.775 **********
2025-03-10 23:53:17.119123 | orchestrator |
2025-03-10 23:53:17.119404 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-03-10 23:53:17.119826 | orchestrator | Monday 10 March 2025 23:53:17 +0000 (0:00:00.044) 0:07:10.820 **********
2025-03-10 23:53:18.337099 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:53:18.337436 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:53:18.337893 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:53:18.338575 | orchestrator |
2025-03-10 23:53:18.339030 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-03-10 23:53:18.339503 | orchestrator | Monday 10 March 2025 23:53:18 +0000 (0:00:01.230) 0:07:12.050 **********
2025-03-10 23:53:19.837845 | orchestrator | changed: [testbed-manager]
2025-03-10 23:53:19.839909 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:53:19.843025 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:53:19.845566 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:53:19.845587 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:53:19.845604 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:53:19.846190 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:53:19.847411 | orchestrator |
2025-03-10 23:53:19.848013 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-03-10 23:53:19.848868 | orchestrator | Monday 10 March 2025 23:53:19 +0000 (0:00:01.500) 0:07:13.550 **********
2025-03-10 23:53:21.047024 | orchestrator | changed: [testbed-manager]
2025-03-10 23:53:21.047308 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:53:21.047344 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:53:21.048494 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:53:21.048643 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:53:21.049802 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:53:21.049961 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:53:21.050452 | orchestrator |
2025-03-10 23:53:21.050731 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-03-10 23:53:21.052625 | orchestrator | Monday 10 March 2025 23:53:21 +0000 (0:00:01.208) 0:07:14.759 **********
2025-03-10 23:53:21.201256 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:53:23.073311 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:53:23.073482 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:53:23.075724 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:53:23.078631 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:53:23.079065 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:53:23.080581 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:53:23.080866 | orchestrator |
2025-03-10 23:53:23.080893 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-03-10 23:53:23.081785 | orchestrator | Monday 10 March 2025 23:53:23 +0000 (0:00:02.025) 0:07:16.784 **********
2025-03-10 23:53:23.186845 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:53:23.187230 | orchestrator |
2025-03-10 23:53:23.187740 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-03-10 23:53:23.188344 | orchestrator | Monday 10 March 2025 23:53:23 +0000 (0:00:00.119) 0:07:16.903 **********
2025-03-10 23:53:24.572166 | orchestrator | ok: [testbed-manager]
2025-03-10 23:53:24.572425 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:53:24.572488 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:53:24.572918 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:53:24.573795 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:53:24.574101 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:53:24.574733 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:53:24.574813 | orchestrator |
2025-03-10 23:53:24.575521 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-03-10 23:53:24.575904 | orchestrator | Monday 10 March 2025 23:53:24 +0000 (0:00:01.381) 0:07:18.285 **********
2025-03-10 23:53:24.728458 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:53:24.798747 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:53:24.867937 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:53:24.952705 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:53:25.038542 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:53:25.168940 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:53:25.169911 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:53:25.171458 | orchestrator |
2025-03-10 23:53:25.173007 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-03-10 23:53:25.174744 | orchestrator | Monday 10 March 2025 23:53:25 +0000 (0:00:00.598) 0:07:18.883 **********
2025-03-10 23:53:26.210705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:53:26.210858 | orchestrator |
2025-03-10 23:53:26.211058 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-03-10 23:53:26.211920 | orchestrator | Monday 10 March 2025 23:53:26 +0000 (0:00:01.041) 0:07:19.925 **********
2025-03-10 23:53:26.693636 | orchestrator | ok: [testbed-manager]
2025-03-10 23:53:27.121086 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:53:27.122081 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:53:27.126562 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:53:27.127646 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:53:27.128801 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:53:27.132315 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:53:27.134112 | orchestrator |
2025-03-10 23:53:27.134141 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-03-10 23:53:28.099927 | orchestrator | Monday 10 March 2025 23:53:27 +0000 (0:00:00.911) 0:07:20.836 **********
2025-03-10 23:53:28.100059 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-03-10 23:53:30.139543 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-03-10 23:53:30.140917 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-03-10 23:53:30.145047 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-03-10 23:53:30.146564 | orchestrator | ok:
[testbed-manager] => (item=docker_images) 2025-03-10 23:53:30.147908 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-03-10 23:53:30.148406 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-03-10 23:53:30.149051 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-03-10 23:53:30.149495 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-03-10 23:53:30.149964 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-03-10 23:53:30.150599 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-03-10 23:53:30.150662 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-03-10 23:53:30.151126 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-03-10 23:53:30.152900 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-03-10 23:53:30.153245 | orchestrator | 2025-03-10 23:53:30.153720 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-03-10 23:53:30.155459 | orchestrator | Monday 10 March 2025 23:53:30 +0000 (0:00:03.016) 0:07:23.853 ********** 2025-03-10 23:53:30.316094 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:53:30.387774 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:53:30.470464 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:53:30.541809 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:53:30.613900 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:53:30.740533 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:53:30.741256 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:53:30.741373 | orchestrator | 2025-03-10 23:53:30.742105 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-03-10 23:53:30.742608 | orchestrator | Monday 10 March 2025 23:53:30 +0000 (0:00:00.604) 0:07:24.457 ********** 2025-03-10 
23:53:31.690987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:53:31.692088 | orchestrator | 2025-03-10 23:53:31.692945 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-03-10 23:53:31.693647 | orchestrator | Monday 10 March 2025 23:53:31 +0000 (0:00:00.942) 0:07:25.400 ********** 2025-03-10 23:53:32.185725 | orchestrator | ok: [testbed-manager] 2025-03-10 23:53:32.925324 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:53:32.925486 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:53:32.926845 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:53:32.927528 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:53:32.928331 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:53:32.929012 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:53:32.930433 | orchestrator | 2025-03-10 23:53:32.931163 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-03-10 23:53:32.931800 | orchestrator | Monday 10 March 2025 23:53:32 +0000 (0:00:01.240) 0:07:26.640 ********** 2025-03-10 23:53:33.357859 | orchestrator | ok: [testbed-manager] 2025-03-10 23:53:33.437903 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:53:33.845784 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:53:33.846779 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:53:33.847793 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:53:33.848955 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:53:33.849911 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:53:33.850797 | orchestrator | 2025-03-10 23:53:33.851655 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-03-10 23:53:33.852337 | orchestrator | Monday 10 March 2025 
23:53:33 +0000 (0:00:00.919) 0:07:27.559 ********** 2025-03-10 23:53:34.009171 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:53:34.085848 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:53:34.164758 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:53:34.239436 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:53:34.323224 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:53:34.443724 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:53:34.445671 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:53:34.446099 | orchestrator | 2025-03-10 23:53:34.450121 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-03-10 23:53:34.452321 | orchestrator | Monday 10 March 2025 23:53:34 +0000 (0:00:00.598) 0:07:28.158 ********** 2025-03-10 23:53:36.094979 | orchestrator | ok: [testbed-manager] 2025-03-10 23:53:36.095584 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:53:36.098827 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:53:36.099629 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:53:36.099658 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:53:36.100816 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:53:36.101704 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:53:36.102349 | orchestrator | 2025-03-10 23:53:36.102857 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-03-10 23:53:36.103368 | orchestrator | Monday 10 March 2025 23:53:36 +0000 (0:00:01.651) 0:07:29.809 ********** 2025-03-10 23:53:36.253913 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:53:36.333405 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:53:36.410458 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:53:36.482495 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:53:36.570520 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:53:36.898509 | orchestrator | skipping: 
[testbed-node-4] 2025-03-10 23:53:36.899464 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:53:36.902066 | orchestrator | 2025-03-10 23:53:36.902137 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-03-10 23:53:36.902995 | orchestrator | Monday 10 March 2025 23:53:36 +0000 (0:00:00.803) 0:07:30.612 ********** 2025-03-10 23:53:44.899241 | orchestrator | ok: [testbed-manager] 2025-03-10 23:53:44.900072 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:53:44.900109 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:53:44.901324 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:53:44.902331 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:53:44.903009 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:53:44.904312 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:53:44.904856 | orchestrator | 2025-03-10 23:53:44.905774 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-03-10 23:53:44.906591 | orchestrator | Monday 10 March 2025 23:53:44 +0000 (0:00:07.997) 0:07:38.609 ********** 2025-03-10 23:53:46.337837 | orchestrator | ok: [testbed-manager] 2025-03-10 23:53:46.338004 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:53:46.339818 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:53:46.341088 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:53:46.343501 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:53:46.344830 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:53:46.344856 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:53:46.344875 | orchestrator | 2025-03-10 23:53:46.345589 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-03-10 23:53:46.346367 | orchestrator | Monday 10 March 2025 23:53:46 +0000 (0:00:01.440) 0:07:40.050 ********** 2025-03-10 23:53:48.342836 | orchestrator | ok: [testbed-manager] 2025-03-10 
23:53:48.342994 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:53:48.343016 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:53:48.343036 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:53:48.343431 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:53:48.344396 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:53:48.345120 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:53:48.345407 | orchestrator | 2025-03-10 23:53:48.346351 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-03-10 23:53:48.346547 | orchestrator | Monday 10 March 2025 23:53:48 +0000 (0:00:02.005) 0:07:42.056 ********** 2025-03-10 23:53:50.398324 | orchestrator | ok: [testbed-manager] 2025-03-10 23:53:50.398686 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:53:50.405329 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:53:50.406724 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:53:50.407148 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:53:50.407891 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:53:50.408397 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:53:50.408744 | orchestrator | 2025-03-10 23:53:50.409488 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-10 23:53:50.409699 | orchestrator | Monday 10 March 2025 23:53:50 +0000 (0:00:02.055) 0:07:44.111 ********** 2025-03-10 23:53:50.923834 | orchestrator | ok: [testbed-manager] 2025-03-10 23:53:51.361346 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:53:51.361730 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:53:51.362800 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:53:51.363391 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:53:51.364213 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:53:51.365081 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:53:51.366611 | orchestrator | 2025-03-10 23:53:51.367087 | 
orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-10 23:53:51.367874 | orchestrator | Monday 10 March 2025 23:53:51 +0000 (0:00:00.966) 0:07:45.077 ********** 2025-03-10 23:53:51.533708 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:53:51.608717 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:53:51.694504 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:53:51.768289 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:53:51.847822 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:53:52.327814 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:53:52.328766 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:53:52.329459 | orchestrator | 2025-03-10 23:53:52.330591 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-03-10 23:53:52.331651 | orchestrator | Monday 10 March 2025 23:53:52 +0000 (0:00:00.962) 0:07:46.040 ********** 2025-03-10 23:53:52.501056 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:53:52.584533 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:53:52.666197 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:53:52.737550 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:53:52.833302 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:53:52.975074 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:53:52.975705 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:53:52.976783 | orchestrator | 2025-03-10 23:53:52.980704 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-03-10 23:53:53.131675 | orchestrator | Monday 10 March 2025 23:53:52 +0000 (0:00:00.650) 0:07:46.691 ********** 2025-03-10 23:53:53.131739 | orchestrator | ok: [testbed-manager] 2025-03-10 23:53:53.441458 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:53:53.521301 | orchestrator | ok: [testbed-node-1] 2025-03-10 
23:53:53.593675 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:53:53.679198 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:53:53.798812 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:53:53.799699 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:53:53.801287 | orchestrator | 2025-03-10 23:53:53.802530 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-03-10 23:53:53.805621 | orchestrator | Monday 10 March 2025 23:53:53 +0000 (0:00:00.820) 0:07:47.511 ********** 2025-03-10 23:53:53.956742 | orchestrator | ok: [testbed-manager] 2025-03-10 23:53:54.038497 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:53:54.111090 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:53:54.198178 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:53:54.274695 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:53:54.387450 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:53:54.387549 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:53:54.389130 | orchestrator | 2025-03-10 23:53:54.392092 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-03-10 23:53:54.555695 | orchestrator | Monday 10 March 2025 23:53:54 +0000 (0:00:00.590) 0:07:48.101 ********** 2025-03-10 23:53:54.555794 | orchestrator | ok: [testbed-manager] 2025-03-10 23:53:54.644996 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:53:54.725553 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:53:54.792896 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:53:54.861070 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:53:54.993215 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:53:54.993778 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:53:54.994453 | orchestrator | 2025-03-10 23:53:54.994943 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-03-10 23:53:54.995498 | orchestrator | Monday 10 March 2025 23:53:54 +0000 (0:00:00.607) 
0:07:48.709 ********** 2025-03-10 23:54:00.826881 | orchestrator | ok: [testbed-manager] 2025-03-10 23:54:00.827368 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:54:00.827398 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:54:00.827419 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:54:00.827949 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:54:00.828302 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:54:00.828817 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:54:00.829743 | orchestrator | 2025-03-10 23:54:00.830594 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-03-10 23:54:00.831483 | orchestrator | Monday 10 March 2025 23:54:00 +0000 (0:00:05.831) 0:07:54.540 ********** 2025-03-10 23:54:00.989660 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:54:01.069192 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:54:01.162487 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:54:01.246449 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:54:01.606917 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:54:01.739511 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:54:01.740102 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:54:01.740384 | orchestrator | 2025-03-10 23:54:01.741081 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-03-10 23:54:01.741451 | orchestrator | Monday 10 March 2025 23:54:01 +0000 (0:00:00.913) 0:07:55.453 ********** 2025-03-10 23:54:02.740899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:54:02.741084 | orchestrator | 2025-03-10 23:54:02.741277 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 
2025-03-10 23:54:02.742001 | orchestrator | Monday 10 March 2025 23:54:02 +0000 (0:00:00.998) 0:07:56.452 ********** 2025-03-10 23:54:04.737593 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:54:04.738833 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:54:04.738863 | orchestrator | ok: [testbed-manager] 2025-03-10 23:54:04.738884 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:54:04.739337 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:54:04.739565 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:54:04.740308 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:54:04.740946 | orchestrator | 2025-03-10 23:54:04.741014 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-03-10 23:54:04.741035 | orchestrator | Monday 10 March 2025 23:54:04 +0000 (0:00:01.996) 0:07:58.448 ********** 2025-03-10 23:54:06.053351 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:54:06.053511 | orchestrator | ok: [testbed-manager] 2025-03-10 23:54:06.054343 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:54:06.055496 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:54:06.056061 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:54:06.056087 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:54:06.056434 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:54:06.057276 | orchestrator | 2025-03-10 23:54:06.057471 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-03-10 23:54:06.058240 | orchestrator | Monday 10 March 2025 23:54:06 +0000 (0:00:01.318) 0:07:59.767 ********** 2025-03-10 23:54:06.670214 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:54:06.750837 | orchestrator | ok: [testbed-manager] 2025-03-10 23:54:06.841340 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:54:07.299194 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:54:07.299832 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:54:07.299863 | orchestrator | ok: [testbed-node-4] 
2025-03-10 23:54:07.299888 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:54:07.300379 | orchestrator | 2025-03-10 23:54:07.301350 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-03-10 23:54:07.302261 | orchestrator | Monday 10 March 2025 23:54:07 +0000 (0:00:01.245) 0:08:01.013 ********** 2025-03-10 23:54:09.149030 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-10 23:54:09.149524 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-10 23:54:09.149673 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-10 23:54:09.149949 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-10 23:54:09.150672 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-10 23:54:09.151288 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-10 23:54:09.154886 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-10 23:54:09.156845 | orchestrator | 2025-03-10 23:54:09.156992 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-03-10 23:54:09.157731 | orchestrator | Monday 10 March 2025 23:54:09 +0000 (0:00:01.849) 0:08:02.862 ********** 2025-03-10 23:54:10.063160 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:54:10.063374 | orchestrator | 2025-03-10 23:54:10.063751 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-03-10 23:54:10.065982 | orchestrator | Monday 10 March 2025 23:54:10 +0000 (0:00:00.915) 0:08:03.778 ********** 2025-03-10 23:54:19.618323 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:54:19.619126 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:54:19.619167 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:54:19.621176 | orchestrator | changed: [testbed-manager] 2025-03-10 23:54:19.622358 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:54:19.623697 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:54:19.624714 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:54:19.625068 | orchestrator | 2025-03-10 23:54:19.625665 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-03-10 23:54:19.626530 | orchestrator | Monday 10 March 2025 23:54:19 +0000 (0:00:09.554) 0:08:13.332 ********** 2025-03-10 23:54:21.654503 | orchestrator | ok: [testbed-manager] 2025-03-10 23:54:21.655125 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:54:21.655158 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:54:21.655797 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:54:21.658153 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:54:21.662513 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:54:21.663075 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:54:21.663657 | orchestrator | 2025-03-10 23:54:21.664323 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-03-10 23:54:21.665493 | orchestrator | Monday 10 March 2025 23:54:21 +0000 
(0:00:02.033) 0:08:15.366 ********** 2025-03-10 23:54:23.238515 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:54:23.238673 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:54:23.238695 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:54:23.238714 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:54:23.240113 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:54:23.240585 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:54:23.243158 | orchestrator | 2025-03-10 23:54:23.244344 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-03-10 23:54:23.245371 | orchestrator | Monday 10 March 2025 23:54:23 +0000 (0:00:01.584) 0:08:16.951 ********** 2025-03-10 23:54:24.646922 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:54:24.648204 | orchestrator | changed: [testbed-manager] 2025-03-10 23:54:24.649298 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:54:24.650094 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:54:24.651308 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:54:24.652246 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:54:24.652949 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:54:24.654443 | orchestrator | 2025-03-10 23:54:24.655048 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-03-10 23:54:24.655077 | orchestrator | 2025-03-10 23:54:24.655851 | orchestrator | TASK [Include hardening role] ************************************************** 2025-03-10 23:54:24.656404 | orchestrator | Monday 10 March 2025 23:54:24 +0000 (0:00:01.409) 0:08:18.361 ********** 2025-03-10 23:54:24.810654 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:54:24.883964 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:54:24.954849 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:54:25.047317 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:54:25.115097 | orchestrator | skipping: 
[testbed-node-3] 2025-03-10 23:54:25.238361 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:54:25.238506 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:54:25.239418 | orchestrator | 2025-03-10 23:54:25.240367 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-03-10 23:54:25.241913 | orchestrator | 2025-03-10 23:54:25.242197 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-03-10 23:54:25.242247 | orchestrator | Monday 10 March 2025 23:54:25 +0000 (0:00:00.591) 0:08:18.952 ********** 2025-03-10 23:54:26.662698 | orchestrator | changed: [testbed-manager] 2025-03-10 23:54:26.664305 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:54:26.664341 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:54:26.665858 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:54:26.667912 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:54:26.670118 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:54:28.687984 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:54:28.688102 | orchestrator | 2025-03-10 23:54:28.688122 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-03-10 23:54:28.688137 | orchestrator | Monday 10 March 2025 23:54:26 +0000 (0:00:01.423) 0:08:20.376 ********** 2025-03-10 23:54:28.688166 | orchestrator | ok: [testbed-manager] 2025-03-10 23:54:28.689547 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:54:28.690517 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:54:28.693438 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:54:28.695797 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:54:28.911171 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:54:28.911252 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:54:28.911268 | orchestrator | 2025-03-10 23:54:28.911283 | orchestrator | TASK [Include auditd role] 
*****************************************************
2025-03-10 23:54:28.911297 | orchestrator | Monday 10 March 2025 23:54:28 +0000 (0:00:02.022) 0:08:22.398 **********
2025-03-10 23:54:28.911320 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:54:28.997712 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:54:29.080250 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:54:29.143655 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:54:29.228480 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:54:29.718677 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:54:29.719474 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:54:29.719666 | orchestrator |
2025-03-10 23:54:29.719799 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-03-10 23:54:29.720083 | orchestrator | Monday 10 March 2025 23:54:29 +0000 (0:00:01.035) 0:08:23.434 **********
2025-03-10 23:54:31.119695 | orchestrator | changed: [testbed-manager]
2025-03-10 23:54:31.121347 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:54:31.125239 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:54:31.126541 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:54:31.127249 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:54:31.127845 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:54:31.128403 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:54:31.129281 | orchestrator |
2025-03-10 23:54:31.129667 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-03-10 23:54:31.130361 | orchestrator |
2025-03-10 23:54:31.130733 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-03-10 23:54:31.132441 | orchestrator | Monday 10 March 2025 23:54:31 +0000 (0:00:01.396) 0:08:24.831 **********
2025-03-10 23:54:32.280141 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:54:32.280388 | orchestrator |
2025-03-10 23:54:32.280414 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-03-10 23:54:32.280436 | orchestrator | Monday 10 March 2025 23:54:32 +0000 (0:00:01.161) 0:08:25.992 **********
2025-03-10 23:54:32.775701 | orchestrator | ok: [testbed-manager]
2025-03-10 23:54:33.232590 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:54:33.232778 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:54:33.234773 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:54:33.235367 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:54:33.235474 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:54:33.236357 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:54:33.236995 | orchestrator |
2025-03-10 23:54:33.237822 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-03-10 23:54:33.238426 | orchestrator | Monday 10 March 2025 23:54:33 +0000 (0:00:00.953) 0:08:26.946 **********
2025-03-10 23:54:34.457571 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:54:34.457747 | orchestrator | changed: [testbed-manager]
2025-03-10 23:54:34.457768 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:54:34.457783 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:54:34.457802 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:54:34.457916 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:54:34.459101 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:54:34.459264 | orchestrator |
2025-03-10 23:54:34.459809 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-03-10 23:54:34.460308 | orchestrator | Monday 10 March 2025 23:54:34 +0000 (0:00:01.219) 0:08:28.165 **********
2025-03-10 23:54:35.600038 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:54:35.600419 | orchestrator |
2025-03-10 23:54:35.601239 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-03-10 23:54:35.602525 | orchestrator | Monday 10 March 2025 23:54:35 +0000 (0:00:01.148) 0:08:29.314 **********
2025-03-10 23:54:36.467498 | orchestrator | ok: [testbed-manager]
2025-03-10 23:54:36.467677 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:54:36.467700 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:54:36.467720 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:54:36.468172 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:54:36.468438 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:54:36.468575 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:54:36.468888 | orchestrator |
2025-03-10 23:54:36.469394 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-03-10 23:54:36.469605 | orchestrator | Monday 10 March 2025 23:54:36 +0000 (0:00:00.866) 0:08:30.180 **********
2025-03-10 23:54:37.687757 | orchestrator | changed: [testbed-manager]
2025-03-10 23:54:37.688135 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:54:37.688171 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:54:37.689221 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:54:37.689561 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:54:37.689912 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:54:37.690544 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:54:37.691367 | orchestrator |
2025-03-10 23:54:37.691843 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:54:37.692308 | orchestrator | 2025-03-10 23:54:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-10 23:54:37.692965 | orchestrator | 2025-03-10 23:54:37 | INFO  | Please wait and do not abort execution.
2025-03-10 23:54:37.692997 | orchestrator | testbed-manager : ok=163  changed=39  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-03-10 23:54:37.696028 | orchestrator | testbed-node-0 : ok=171  changed=67  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-03-10 23:54:37.697290 | orchestrator | testbed-node-1 : ok=171  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-03-10 23:54:37.698450 | orchestrator | testbed-node-2 : ok=171  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-03-10 23:54:37.698919 | orchestrator | testbed-node-3 : ok=170  changed=64  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-03-10 23:54:37.700050 | orchestrator | testbed-node-4 : ok=170  changed=64  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-03-10 23:54:37.700901 | orchestrator | testbed-node-5 : ok=170  changed=64  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-03-10 23:54:37.701597 | orchestrator |
2025-03-10 23:54:37.702608 | orchestrator |
2025-03-10 23:54:37.702732 | orchestrator | TASKS RECAP ********************************************************************
2025-03-10 23:54:37.703142 | orchestrator | Monday 10 March 2025 23:54:37 +0000 (0:00:01.222) 0:08:31.403 **********
2025-03-10 23:54:37.704057 | orchestrator | ===============================================================================
2025-03-10 23:54:37.705458 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.00s
2025-03-10 23:54:37.706223 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.69s
2025-03-10 23:54:37.707224 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.54s
2025-03-10 23:54:37.709182 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------ 17.39s
2025-03-10 23:54:37.709467 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.83s
2025-03-10 23:54:37.710359 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.51s
2025-03-10 23:54:37.710826 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.34s
2025-03-10 23:54:37.711181 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.83s
2025-03-10 23:54:37.712129 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.19s
2025-03-10 23:54:37.713145 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.55s
2025-03-10 23:54:37.714230 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.43s
2025-03-10 23:54:37.715454 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.99s
2025-03-10 23:54:37.716464 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.34s
2025-03-10 23:54:37.717105 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.31s
2025-03-10 23:54:37.717826 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.27s
2025-03-10 23:54:37.718740 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.02s
2025-03-10 23:54:37.719252 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.00s
2025-03-10 23:54:37.720255 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.73s
2025-03-10 23:54:37.720484 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.28s
2025-03-10 23:54:37.721380 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.14s
2025-03-10 23:54:38.558097 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-03-10 23:54:40.860287 | orchestrator | + osism apply network
2025-03-10 23:54:40.860428 | orchestrator | 2025-03-10 23:54:40 | INFO  | Task 3423f58f-49ec-4baa-bd9d-a36eb4851fd5 (network) was prepared for execution.
2025-03-10 23:54:45.042643 | orchestrator | 2025-03-10 23:54:40 | INFO  | It takes a moment until task 3423f58f-49ec-4baa-bd9d-a36eb4851fd5 (network) has been started and output is visible here.
2025-03-10 23:54:45.042782 | orchestrator |
2025-03-10 23:54:45.042866 | orchestrator | PLAY [Apply role network] ******************************************************
2025-03-10 23:54:45.043311 | orchestrator |
2025-03-10 23:54:45.043795 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-03-10 23:54:45.044244 | orchestrator | Monday 10 March 2025 23:54:45 +0000 (0:00:00.248) 0:00:00.248 **********
2025-03-10 23:54:45.144314 | orchestrator | [WARNING]: Found variable using reserved name: q
2025-03-10 23:54:45.226337 | orchestrator | ok: [testbed-manager]
2025-03-10 23:54:45.308877 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:54:45.401863 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:54:45.519231 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:54:45.609941 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:54:45.873880 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:54:45.874978 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:54:45.875755 | orchestrator |
2025-03-10 23:54:45.875793 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-03-10 23:54:45.876424 | orchestrator | Monday 10 March 2025 23:54:45 +0000 (0:00:00.832) 0:00:01.081 **********
2025-03-10 23:54:47.256604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:54:47.257872 | orchestrator |
2025-03-10 23:54:47.257976 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-03-10 23:54:49.332471 | orchestrator | Monday 10 March 2025 23:54:47 +0000 (0:00:01.377) 0:00:02.459 **********
2025-03-10 23:54:49.332609 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:54:49.334363 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:54:49.335883 | orchestrator | ok: [testbed-manager]
2025-03-10 23:54:49.338681 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:54:49.339688 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:54:49.340732 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:54:49.341401 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:54:49.342606 | orchestrator |
2025-03-10 23:54:49.343011 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-03-10 23:54:49.343789 | orchestrator | Monday 10 March 2025 23:54:49 +0000 (0:00:02.080) 0:00:04.540 **********
2025-03-10 23:54:51.163721 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:54:51.164258 | orchestrator | ok: [testbed-manager]
2025-03-10 23:54:51.164855 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:54:51.165493 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:54:51.166151 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:54:51.166588 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:54:51.167421 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:54:51.167875 | orchestrator |
2025-03-10 23:54:51.168373 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-03-10 23:54:51.168778 | orchestrator | Monday 10 March 2025 23:54:51 +0000 (0:00:01.825) 0:00:06.365 **********
2025-03-10 23:54:51.749255 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-03-10 23:54:51.749407 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-03-10 23:54:51.750670 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-03-10 23:54:52.331389 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-03-10 23:54:52.331718 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-03-10 23:54:52.332612 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-03-10 23:54:52.333548 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-03-10 23:54:52.334491 | orchestrator |
2025-03-10 23:54:52.337802 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-03-10 23:54:52.338655 | orchestrator | Monday 10 March 2025 23:54:52 +0000 (0:00:01.175) 0:00:07.541 **********
2025-03-10 23:54:54.641258 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-03-10 23:54:54.642384 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-03-10 23:54:54.642587 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-10 23:54:54.642619 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-03-10 23:54:54.643148 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-03-10 23:54:54.643887 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-03-10 23:54:54.644572 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-03-10 23:54:54.646508 | orchestrator |
2025-03-10 23:54:54.647059 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-03-10 23:54:54.647546 | orchestrator | Monday 10 March 2025 23:54:54 +0000 (0:00:02.309) 0:00:09.850 **********
2025-03-10 23:54:56.441086 | orchestrator | changed: [testbed-manager]
2025-03-10 23:54:56.441800 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:54:56.443104 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:54:56.443319 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:54:56.444527 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:54:56.448067 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:54:56.448867 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:54:56.449734 | orchestrator |
2025-03-10 23:54:56.450419 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-03-10 23:54:56.450864 | orchestrator | Monday 10 March 2025 23:54:56 +0000 (0:00:01.797) 0:00:11.648 **********
2025-03-10 23:54:57.136502 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-10 23:54:57.681249 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-03-10 23:54:57.681421 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-03-10 23:54:57.681502 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-03-10 23:54:57.681567 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-03-10 23:54:57.681588 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-03-10 23:54:57.682131 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-03-10 23:54:57.682416 | orchestrator |
2025-03-10 23:54:57.683225 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-03-10 23:54:57.683311 | orchestrator | Monday 10 March 2025 23:54:57 +0000 (0:00:01.243) 0:00:12.891 **********
2025-03-10 23:54:58.189848 | orchestrator | ok: [testbed-manager]
2025-03-10 23:54:58.288920 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:54:59.078740 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:54:59.079716 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:54:59.083031 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:54:59.084262 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:54:59.084300 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:54:59.084315 | orchestrator |
2025-03-10 23:54:59.084336 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-03-10 23:54:59.085230 | orchestrator | Monday 10 March 2025 23:54:59 +0000 (0:00:01.391) 0:00:14.283 **********
2025-03-10 23:54:59.279432 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:54:59.371583 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:54:59.466964 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:54:59.566910 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:54:59.687163 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:54:59.837422 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:54:59.838791 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:54:59.844915 | orchestrator |
2025-03-10 23:55:01.970529 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-03-10 23:55:01.970714 | orchestrator | Monday 10 March 2025 23:54:59 +0000 (0:00:00.758) 0:00:15.042 **********
2025-03-10 23:55:01.970752 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:55:01.970830 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:55:01.970853 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:55:01.971501 | orchestrator | ok: [testbed-manager]
2025-03-10 23:55:01.975220 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:55:01.975909 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:55:01.977693 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:55:01.978866 | orchestrator |
2025-03-10 23:55:01.979538 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-03-10 23:55:01.981495 | orchestrator | Monday 10 March 2025 23:55:01 +0000 (0:00:02.132) 0:00:17.174 **********
2025-03-10 23:55:02.247889 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:55:02.339231 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:55:02.435662 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:55:02.525087 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:55:02.952852 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:55:02.953702 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:55:02.954765 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-03-10 23:55:02.958803 | orchestrator |
2025-03-10 23:55:04.767532 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-03-10 23:55:04.767642 | orchestrator | Monday 10 March 2025 23:55:02 +0000 (0:00:00.987) 0:00:18.162 **********
2025-03-10 23:55:04.767684 | orchestrator | ok: [testbed-manager]
2025-03-10 23:55:04.767828 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:55:04.768578 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:55:04.769166 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:55:04.769576 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:55:04.769895 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:55:04.771761 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:55:04.773426 | orchestrator |
2025-03-10 23:55:04.774092 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-03-10 23:55:04.774958 | orchestrator | Monday 10 March 2025 23:55:04 +0000 (0:00:01.807) 0:00:19.970 **********
2025-03-10 23:55:06.322805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:55:06.323152 | orchestrator |
2025-03-10 23:55:06.324355 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-03-10 23:55:06.324821 | orchestrator | Monday 10 March 2025 23:55:06 +0000 (0:00:01.557) 0:00:21.527 **********
2025-03-10 23:55:07.211476 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:55:07.653254 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:55:07.654204 | orchestrator | ok: [testbed-manager]
2025-03-10 23:55:07.655449 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:55:07.658107 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:55:07.658594 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:55:07.658619 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:55:07.658633 | orchestrator |
2025-03-10 23:55:07.658652 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-03-10 23:55:07.659623 | orchestrator | Monday 10 March 2025 23:55:07 +0000 (0:00:01.331) 0:00:22.858 **********
2025-03-10 23:55:07.860579 | orchestrator | ok: [testbed-manager]
2025-03-10 23:55:07.948423 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:55:08.050402 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:55:08.149097 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:55:08.241137 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:55:08.412930 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:55:08.418243 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:55:09.182967 | orchestrator |
2025-03-10 23:55:09.183071 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-03-10 23:55:09.183087 | orchestrator | Monday 10 March 2025 23:55:08 +0000 (0:00:00.763) 0:00:23.622 **********
2025-03-10 23:55:09.183114 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-10 23:55:09.186675 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-03-10 23:55:09.186819 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-10 23:55:09.816081 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-03-10 23:55:09.816239 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-10 23:55:09.816256 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-03-10 23:55:09.816270 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-10 23:55:09.816284 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-03-10 23:55:09.816298 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-10 23:55:09.816312 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-03-10 23:55:09.816339 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-10 23:55:09.816409 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-03-10 23:55:09.817639 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-03-10 23:55:09.818247 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-03-10 23:55:09.819046 | orchestrator |
2025-03-10 23:55:09.819991 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-03-10 23:55:09.823001 | orchestrator | Monday 10 March 2025 23:55:09 +0000 (0:00:01.400) 0:00:25.022 **********
2025-03-10 23:55:10.033082 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:55:10.126377 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:55:10.215904 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:55:10.301277 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:55:10.386760 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:55:10.526361 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:55:10.526935 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:55:10.526987 | orchestrator |
2025-03-10 23:55:10.527300 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-03-10 23:55:10.527720 | orchestrator | Monday 10 March 2025 23:55:10 +0000 (0:00:00.714) 0:00:25.737 **********
2025-03-10 23:55:14.747812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-2, testbed-node-3, testbed-node-5
2025-03-10 23:55:14.748043 | orchestrator |
2025-03-10 23:55:14.749464 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-03-10 23:55:14.751299 | orchestrator | Monday 10 March 2025 23:55:14 +0000 (0:00:04.214) 0:00:29.952 **********
2025-03-10 23:55:20.398572 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:20.399073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:20.399123 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:20.401752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:20.402780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:20.402820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:20.402836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:20.402859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:20.403327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:20.403929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:20.404484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:20.405246 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:20.405637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:20.405670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:20.405932 | orchestrator |
2025-03-10 23:55:20.406269 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-03-10 23:55:20.406476 | orchestrator | Monday 10 March 2025 23:55:20 +0000 (0:00:05.653) 0:00:35.606 **********
2025-03-10 23:55:26.381657 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:26.383050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:26.384308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:26.386804 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:26.390305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:26.391008 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:26.391321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:26.391919 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:26.395512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:26.395935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-03-10 23:55:26.399885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:26.400400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:26.403224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:26.405889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-03-10 23:55:26.406221 | orchestrator |
2025-03-10 23:55:26.406942 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-03-10 23:55:26.407790 | orchestrator | Monday 10 March 2025 23:55:26 +0000 (0:00:05.979) 0:00:41.585 **********
2025-03-10 23:55:27.822007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-10 23:55:27.822290 | orchestrator |
2025-03-10 23:55:27.823291 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-03-10 23:55:27.824879 | orchestrator | Monday 10 March 2025 23:55:27 +0000 (0:00:01.440) 0:00:43.025 **********
2025-03-10 23:55:28.332309 | orchestrator | ok: [testbed-manager]
2025-03-10 23:55:28.444458 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:55:28.554731 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:55:29.012832 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:55:29.014950 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:55:29.015972 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:55:29.017408 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:55:29.017770 | orchestrator |
2025-03-10 23:55:29.018848 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-03-10 23:55:29.019905 | orchestrator | Monday 10 March 2025 23:55:29 +0000 (0:00:01.192) 0:00:44.218 **********
2025-03-10 23:55:29.120353 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-10 23:55:29.120567 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-10 23:55:29.121688 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-10 23:55:29.122612 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-10 23:55:29.230601 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-10 23:55:29.231796 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-10 23:55:29.236029 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-10 23:55:29.349520 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-10 23:55:29.349562 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:55:29.350757 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-10 23:55:29.351405 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-10 23:55:29.353104 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-10 23:55:29.354527 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-10 23:55:29.689593 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:55:29.689768 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-10 23:55:29.689796 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-10 23:55:29.690395 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-10 23:55:29.839302 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-10 23:55:29.839410 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:55:29.840096 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-10 23:55:29.840123 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-10 23:55:29.840196 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-10 23:55:29.840477 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-10 23:55:29.994514 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:55:29.995483 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-03-10 23:55:29.997631 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-03-10 23:55:29.999310 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-03-10 23:55:30.000654 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-03-10 23:55:31.496851 | orchestrator | skipping:
[testbed-node-3] 2025-03-10 23:55:31.497245 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:55:31.497288 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-03-10 23:55:31.500037 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-03-10 23:55:31.500846 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-03-10 23:55:31.502419 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-03-10 23:55:31.503857 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:55:31.504320 | orchestrator | 2025-03-10 23:55:31.504939 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-03-10 23:55:31.505668 | orchestrator | Monday 10 March 2025 23:55:31 +0000 (0:00:02.473) 0:00:46.692 ********** 2025-03-10 23:55:31.702387 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:55:31.806742 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:55:31.911599 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:55:32.011990 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:55:32.107786 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:55:32.443761 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:55:32.445854 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:55:32.448014 | orchestrator | 2025-03-10 23:55:32.448462 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-03-10 23:55:32.449712 | orchestrator | Monday 10 March 2025 23:55:32 +0000 (0:00:00.957) 0:00:47.650 ********** 2025-03-10 23:55:32.670779 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:55:32.760525 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:55:32.853470 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:55:32.949596 | orchestrator | skipping: [testbed-node-2] 
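The vxlan1 items above follow a clear full-mesh pattern: every node gets an overlay address in 192.168.128.0/20, and its `dests` list is every underlay address in the mesh except its own `local_ip`. A minimal sketch of that relationship, assuming the node-to-IP mapping visible in this log (this is illustrative Python, not OSISM role code):

```python
# Underlay addresses of the testbed hosts as they appear in the log items.
# The comments map IPs to hosts; this mapping is inferred from the log.
UNDERLAY_IPS = [
    "192.168.16.5",   # testbed-manager
    "192.168.16.10",  # testbed-node-0
    "192.168.16.11",  # testbed-node-1
    "192.168.16.12",  # testbed-node-2
    "192.168.16.13",  # testbed-node-3
    "192.168.16.14",  # testbed-node-4
    "192.168.16.15",  # testbed-node-5
]


def vxlan_dests(local_ip: str) -> list[str]:
    """Return the remote VXLAN endpoints for one node: all underlay
    addresses except its own, in the lexicographic order the log shows
    ('.10'..'.15' sort before '.5' as strings)."""
    return sorted(ip for ip in UNDERLAY_IPS if ip != local_ip)


# testbed-node-2 (local_ip 192.168.16.12) peers with the six other hosts:
print(vxlan_dests("192.168.16.12"))
```

The `mtu: 1350` on each item is consistent with VXLAN encapsulation overhead (roughly 50 bytes for the outer IPv4/UDP/VXLAN headers) being subtracted from the underlay MTU, though the underlay MTU itself is not shown in this excerpt.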
2025-03-10 23:55:33.044079 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:55:33.087675 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:55:33.088548 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:55:33.090198 | orchestrator |
2025-03-10 23:55:33.091130 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:55:33.091326 | orchestrator | 2025-03-10 23:55:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-10 23:55:33.091971 | orchestrator | 2025-03-10 23:55:33 | INFO  | Please wait and do not abort execution.
2025-03-10 23:55:33.092005 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-10 23:55:33.092911 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-10 23:55:33.093860 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-10 23:55:33.094603 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-10 23:55:33.095573 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-10 23:55:33.096323 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-10 23:55:33.097488 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-10 23:55:33.098133 | orchestrator |
2025-03-10 23:55:33.098487 | orchestrator |
2025-03-10 23:55:33.099743 | orchestrator | TASKS RECAP ********************************************************************
2025-03-10 23:55:33.100145 | orchestrator | Monday 10 March 2025 23:55:33 +0000 (0:00:00.648) 0:00:48.298 **********
2025-03-10 23:55:33.100378 | orchestrator | ===============================================================================
2025-03-10 23:55:33.100717 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.98s
2025-03-10 23:55:33.101046 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.65s
2025-03-10 23:55:33.101431 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.21s
2025-03-10 23:55:33.101591 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.47s
2025-03-10 23:55:33.101927 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.31s
2025-03-10 23:55:33.102292 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.13s
2025-03-10 23:55:33.102778 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.08s
2025-03-10 23:55:33.103109 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.83s
2025-03-10 23:55:33.103502 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.81s
2025-03-10 23:55:33.103739 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.80s
2025-03-10 23:55:33.104105 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.56s
2025-03-10 23:55:33.104723 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.44s
2025-03-10 23:55:33.105204 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.40s
2025-03-10 23:55:33.105683 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.39s
2025-03-10 23:55:33.106074 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.38s
2025-03-10 23:55:33.106395 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.33s
2025-03-10 23:55:33.106694 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.24s
2025-03-10 23:55:33.107138 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s
2025-03-10 23:55:33.107439 | orchestrator | osism.commons.network : Create required directories --------------------- 1.18s
2025-03-10 23:55:33.107832 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.99s
2025-03-10 23:55:33.822699 | orchestrator | + osism apply wireguard
2025-03-10 23:55:35.493792 | orchestrator | 2025-03-10 23:55:35 | INFO  | Task 7ef4d848-0e02-47ba-8dd5-d54471b84f19 (wireguard) was prepared for execution.
2025-03-10 23:55:39.195116 | orchestrator | 2025-03-10 23:55:35 | INFO  | It takes a moment until task 7ef4d848-0e02-47ba-8dd5-d54471b84f19 (wireguard) has been started and output is visible here.
2025-03-10 23:55:39.195320 | orchestrator |
2025-03-10 23:55:39.196282 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-03-10 23:55:39.198789 | orchestrator |
2025-03-10 23:55:39.200143 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-03-10 23:55:39.200713 | orchestrator | Monday 10 March 2025 23:55:39 +0000 (0:00:00.202) 0:00:00.202 **********
2025-03-10 23:55:40.911466 | orchestrator | ok: [testbed-manager]
2025-03-10 23:55:40.911645 | orchestrator |
2025-03-10 23:55:40.915892 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-03-10 23:55:48.385450 | orchestrator | Monday 10 March 2025 23:55:40 +0000 (0:00:01.719) 0:00:01.922 **********
2025-03-10 23:55:48.385601 | orchestrator | changed: [testbed-manager]
2025-03-10 23:55:48.386873 | orchestrator |
2025-03-10 23:55:48.386915 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-03-10 23:55:48.386938 | orchestrator | Monday 10 March 2025 23:55:48 +0000 (0:00:07.472) 0:00:09.395 **********
2025-03-10 23:55:49.025206 | orchestrator | changed: [testbed-manager]
2025-03-10 23:55:49.025941 | orchestrator |
2025-03-10 23:55:49.026553 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-03-10 23:55:49.027874 | orchestrator | Monday 10 March 2025 23:55:49 +0000 (0:00:00.477) 0:00:10.037 **********
2025-03-10 23:55:49.506354 | orchestrator | changed: [testbed-manager]
2025-03-10 23:55:49.506549 | orchestrator |
2025-03-10 23:55:49.508165 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-03-10 23:55:49.508845 | orchestrator | Monday 10 March 2025 23:55:49 +0000 (0:00:00.634) 0:00:10.515 **********
2025-03-10 23:55:50.136863 | orchestrator | ok: [testbed-manager]
2025-03-10 23:55:50.137177 | orchestrator |
2025-03-10 23:55:50.137539 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-03-10 23:55:50.138326 | orchestrator | Monday 10 March 2025 23:55:50 +0000 (0:00:00.631) 0:00:11.149 **********
2025-03-10 23:55:50.771033 | orchestrator | ok: [testbed-manager]
2025-03-10 23:55:50.771568 | orchestrator |
2025-03-10 23:55:50.772188 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-03-10 23:55:50.772723 | orchestrator | Monday 10 March 2025 23:55:50 +0000 (0:00:00.465) 0:00:11.782 **********
2025-03-10 23:55:51.236085 | orchestrator | ok: [testbed-manager]
2025-03-10 23:55:51.236832 | orchestrator |
2025-03-10 23:55:51.237265 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-03-10 23:55:51.237642 | orchestrator | Monday 10 March 2025 23:55:51 +0000 (0:00:00.465) 0:00:12.247 **********
2025-03-10 23:55:52.519037 | orchestrator | changed: [testbed-manager]
2025-03-10 23:55:52.520045 | orchestrator |
2025-03-10 23:55:52.520717 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-03-10 23:55:52.520764 | orchestrator | Monday 10 March 2025 23:55:52 +0000 (0:00:01.281) 0:00:13.528 **********
2025-03-10 23:55:53.526659 | orchestrator | changed: [testbed-manager] => (item=None)
2025-03-10 23:55:53.526842 | orchestrator | changed: [testbed-manager]
2025-03-10 23:55:53.527105 | orchestrator |
2025-03-10 23:55:53.527465 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-03-10 23:55:53.528198 | orchestrator | Monday 10 March 2025 23:55:53 +0000 (0:00:01.008) 0:00:14.537 **********
2025-03-10 23:55:55.385462 | orchestrator | changed: [testbed-manager]
2025-03-10 23:55:55.385730 | orchestrator |
2025-03-10 23:55:55.387772 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-03-10 23:55:55.388166 | orchestrator | Monday 10 March 2025 23:55:55 +0000 (0:00:01.857) 0:00:16.395 **********
2025-03-10 23:55:56.353961 | orchestrator | changed: [testbed-manager]
2025-03-10 23:55:56.354249 | orchestrator |
2025-03-10 23:55:56.354296 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:55:56.354886 | orchestrator | 2025-03-10 23:55:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-10 23:55:56.355368 | orchestrator | 2025-03-10 23:55:56 | INFO  | Please wait and do not abort execution.
2025-03-10 23:55:56.355413 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-10 23:55:56.356471 | orchestrator |
2025-03-10 23:55:56.356778 | orchestrator |
2025-03-10 23:55:56.357231 | orchestrator | TASKS RECAP ********************************************************************
2025-03-10 23:55:56.357836 | orchestrator | Monday 10 March 2025 23:55:56 +0000 (0:00:00.971) 0:00:17.366 **********
2025-03-10 23:55:56.358258 | orchestrator | ===============================================================================
2025-03-10 23:55:56.358488 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.47s
2025-03-10 23:55:56.358672 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.86s
2025-03-10 23:55:56.359151 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.72s
2025-03-10 23:55:56.359573 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.28s
2025-03-10 23:55:56.359794 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s
2025-03-10 23:55:56.360110 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s
2025-03-10 23:55:56.360499 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.64s
2025-03-10 23:55:56.360880 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.63s
2025-03-10 23:55:56.361188 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.63s
2025-03-10 23:55:56.361449 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.48s
2025-03-10 23:55:56.361772 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.47s
2025-03-10 23:55:57.103831 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-03-10 23:55:57.141808 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-03-10 23:55:57.141896 | orchestrator | Dload Upload Total Spent Left Speed
2025-03-10 23:55:57.214682 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 190 0 --:--:-- --:--:-- --:--:-- 191
2025-03-10 23:55:57.229819 | orchestrator | + osism apply --environment custom workarounds
2025-03-10 23:55:58.879903 | orchestrator | 2025-03-10 23:55:58 | INFO  | Trying to run play workarounds in environment custom
2025-03-10 23:55:58.936014 | orchestrator | 2025-03-10 23:55:58 | INFO  | Task af06768e-ae98-472e-a843-f0ae7ad2197b (workarounds) was prepared for execution.
2025-03-10 23:56:02.739121 | orchestrator | 2025-03-10 23:55:58 | INFO  | It takes a moment until task af06768e-ae98-472e-a843-f0ae7ad2197b (workarounds) has been started and output is visible here.
2025-03-10 23:56:02.739292 | orchestrator |
2025-03-10 23:56:02.740711 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-10 23:56:02.741616 | orchestrator |
2025-03-10 23:56:02.743061 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-03-10 23:56:02.744983 | orchestrator | Monday 10 March 2025 23:56:02 +0000 (0:00:00.168) 0:00:00.168 **********
2025-03-10 23:56:02.933762 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-03-10 23:56:03.034243 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-03-10 23:56:03.136474 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-03-10 23:56:03.238871 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-03-10 23:56:03.449018 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-03-10 23:56:03.645846 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-03-10 23:56:03.646713 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-03-10 23:56:03.647461 | orchestrator |
2025-03-10 23:56:03.648604 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-03-10 23:56:03.648975 | orchestrator |
2025-03-10 23:56:03.649617 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-03-10 23:56:03.651727 | orchestrator | Monday 10 March 2025 23:56:03 +0000 (0:00:00.909) 0:00:01.078 **********
2025-03-10 23:56:06.535945 | orchestrator | ok: [testbed-manager]
2025-03-10 23:56:06.536345 | orchestrator |
2025-03-10 23:56:06.540682 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-03-10 23:56:06.541290 | orchestrator |
2025-03-10 23:56:06.542969 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-03-10 23:56:06.543906 | orchestrator | Monday 10 March 2025 23:56:06 +0000 (0:00:02.884) 0:00:03.963 **********
2025-03-10 23:56:08.481339 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:56:08.481528 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:56:08.482255 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:56:08.482356 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:56:08.482643 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:56:08.482691 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:56:08.483334 | orchestrator |
2025-03-10 23:56:08.483656 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-03-10 23:56:08.484322 | orchestrator |
2025-03-10 23:56:08.484878 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-03-10 23:56:08.485406 | orchestrator | Monday 10 March 2025 23:56:08 +0000 (0:00:01.945) 0:00:05.908 **********
2025-03-10 23:56:10.078793 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-10 23:56:10.079107 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-10 23:56:10.079782 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-10 23:56:10.080470 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-10 23:56:10.082763 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-10 23:56:10.083352 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-03-10 23:56:10.084358 | orchestrator |
2025-03-10 23:56:10.085044 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-03-10 23:56:10.085608 | orchestrator | Monday 10 March 2025 23:56:10 +0000 (0:00:01.597) 0:00:07.505 **********
2025-03-10 23:56:13.994729 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:56:13.995219 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:56:13.997757 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:56:13.998181 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:56:13.999888 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:56:14.000402 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:56:14.001731 | orchestrator |
2025-03-10 23:56:14.002948 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-03-10 23:56:14.004053 | orchestrator | Monday 10 March 2025 23:56:13 +0000 (0:00:03.918) 0:00:11.423 **********
2025-03-10 23:56:14.206477 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:56:14.320098 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:56:14.407227 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:56:14.496573 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:56:14.867765 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:56:14.868045 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:56:14.869488 | orchestrator |
2025-03-10 23:56:14.870349 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-03-10 23:56:14.871224 | orchestrator |
2025-03-10 23:56:14.872047 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-03-10 23:56:14.873076 | orchestrator | Monday 10 March 2025 23:56:14 +0000 (0:00:00.876) 0:00:12.299 **********
2025-03-10 23:56:16.943256 | orchestrator | changed: [testbed-manager]
2025-03-10 23:56:16.943421 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:56:16.943932 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:56:16.944207 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:56:16.944750 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:56:16.945918 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:56:16.947900 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:56:16.949104 | orchestrator |
2025-03-10 23:56:16.949945 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-03-10 23:56:16.950879 | orchestrator | Monday 10 March 2025 23:56:16 +0000 (0:00:02.073) 0:00:14.372 **********
2025-03-10 23:56:18.765918 | orchestrator | changed: [testbed-manager]
2025-03-10 23:56:18.767300 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:56:18.767381 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:56:18.771280 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:56:18.771574 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:56:18.772768 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:56:18.773266 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:56:18.774166 | orchestrator |
2025-03-10 23:56:18.774719 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-03-10 23:56:18.775281 | orchestrator | Monday 10 March 2025 23:56:18 +0000 (0:00:01.819) 0:00:16.192 **********
2025-03-10 23:56:20.361655 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:56:20.362101 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:56:20.363043 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:56:20.365618 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:56:20.366913 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:56:20.368192 | orchestrator | ok: [testbed-manager]
2025-03-10 23:56:20.368637 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:56:20.369523 | orchestrator |
2025-03-10 23:56:20.370600 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-03-10 23:56:20.371107 | orchestrator | Monday 10 March 2025 23:56:20 +0000 (0:00:01.600) 0:00:17.793 **********
2025-03-10 23:56:22.266444 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:56:22.269076 | orchestrator | changed: [testbed-manager]
2025-03-10 23:56:22.269142 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:56:22.269286 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:56:22.269309 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:56:22.269321 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:56:22.269334 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:56:22.269346 | orchestrator |
2025-03-10 23:56:22.269360 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-03-10 23:56:22.269379 | orchestrator | Monday 10 March 2025 23:56:22 +0000 (0:00:01.897) 0:00:19.691 **********
2025-03-10 23:56:22.439287 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:56:22.532942 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:56:22.631833 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:56:22.723096 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:56:22.814781 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:56:22.948161 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:56:22.948710 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:56:22.949710 | orchestrator |
2025-03-10 23:56:22.950755 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-03-10 23:56:22.951764 | orchestrator |
2025-03-10 23:56:22.952771 | orchestrator | TASK [Install python3-docker] **************************************************
2025-03-10 23:56:22.953572 | orchestrator | Monday 10 March 2025 23:56:22 +0000 (0:00:00.685) 0:00:20.376 **********
2025-03-10 23:56:25.801757 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:56:25.802346 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:56:25.802974 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:56:25.803603 | orchestrator | ok: [testbed-manager]
2025-03-10 23:56:25.804378 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:56:25.805259 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:56:25.806451 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:56:25.807079 | orchestrator |
2025-03-10 23:56:25.809283 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:56:25.809625 | orchestrator | 2025-03-10 23:56:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-10 23:56:25.809774 | orchestrator | 2025-03-10 23:56:25 | INFO  | Please wait and do not abort execution.
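The workarounds play above copies testbed.crt to every node and then refreshes the system trust store with a distro-dependent task pair: "Run update-ca-certificates" runs (`changed`) on these Debian-family Ubuntu 24.04 nodes, while "Run update-ca-trust" for RedHat-family hosts is skipped. A minimal sketch of that selection logic, assuming the usual Ansible `ansible_os_family` values (illustrative Python, not the playbook's actual tasks):

```python
# Sketch of the distro-dependent trust-store refresh seen above:
# Debian-family hosts use update-ca-certificates, RedHat-family hosts
# use update-ca-trust. The os_family values mirror Ansible facts.

def ca_refresh_command(os_family: str) -> str:
    """Pick the trust-store refresh command for a host's OS family."""
    commands = {
        "Debian": "update-ca-certificates",  # ran on the Ubuntu 24.04 nodes
        "RedHat": "update-ca-trust",         # skipped in this run
    }
    try:
        return commands[os_family]
    except KeyError:
        raise ValueError(f"unsupported OS family: {os_family}")


print(ca_refresh_command("Debian"))
```

Guarding each task with a condition on the OS family is what produces the `changed`/`skipping` split visible in the log: only the branch matching the host's distro executes.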
2025-03-10 23:56:25.810326 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-03-10 23:56:25.810910 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:56:25.811509 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:56:25.812087 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:56:25.812401 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:56:25.812838 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:56:25.813322 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:56:25.813828 | orchestrator |
2025-03-10 23:56:25.814259 | orchestrator |
2025-03-10 23:56:25.814849 | orchestrator | TASKS RECAP ********************************************************************
2025-03-10 23:56:25.815355 | orchestrator | Monday 10 March 2025 23:56:25 +0000 (0:00:02.855) 0:00:23.232 **********
2025-03-10 23:56:25.816263 | orchestrator | ===============================================================================
2025-03-10 23:56:25.816773 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.92s
2025-03-10 23:56:25.817390 | orchestrator | Apply netplan configuration --------------------------------------------- 2.88s
2025-03-10 23:56:25.818425 | orchestrator | Install python3-docker -------------------------------------------------- 2.86s
2025-03-10 23:56:25.818965 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 2.07s
2025-03-10 23:56:25.819599 | orchestrator | Apply netplan configuration --------------------------------------------- 1.95s
2025-03-10 23:56:25.819998 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.90s
2025-03-10 23:56:25.820456 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.82s
2025-03-10 23:56:25.820765 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.60s
2025-03-10 23:56:25.821249 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.60s
2025-03-10 23:56:25.821496 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.91s
2025-03-10 23:56:25.822518 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.88s
2025-03-10 23:56:25.823003 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.69s
2025-03-10 23:56:26.515015 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-03-10 23:56:28.169610 | orchestrator | 2025-03-10 23:56:28 | INFO  | Task 43f8a374-205a-4f41-a195-90a920c5540f (reboot) was prepared for execution.
2025-03-10 23:56:32.151216 | orchestrator | 2025-03-10 23:56:28 | INFO  | It takes a moment until task 43f8a374-205a-4f41-a195-90a920c5540f (reboot) has been started and output is visible here.
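The reboot play invoked above is passed `-e ireallymeanit=yes` because its first task, "Exit playbook, if user did not mean to reboot systems", aborts the run unless the operator explicitly confirms. A minimal sketch of that confirmation-guard pattern (illustrative Python, not OSISM code):

```python
# Sketch of the confirmation guard behind the reboot play above: the run
# proceeds only when the extra variable ireallymeanit equals "yes", which
# is why the CI pipeline passes "-e ireallymeanit=yes" explicitly.

def reboot_allowed(extra_vars: dict) -> bool:
    """Return True only when the operator explicitly confirmed the reboot."""
    return extra_vars.get("ireallymeanit", "no") == "yes"


print(reboot_allowed({"ireallymeanit": "yes"}))  # the CI invocation above
print(reboot_allowed({}))                        # default: refuse to reboot
```

With the confirmation given, the guard task is `skipping` for each node in the log, and the play proceeds to the fire-and-forget reboot ("do not wait for the reboot to complete") host by host.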
2025-03-10 23:56:32.151369 | orchestrator |
2025-03-10 23:56:32.257236 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-10 23:56:32.257404 | orchestrator |
2025-03-10 23:56:32.257425 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-10 23:56:32.257441 | orchestrator | Monday 10 March 2025 23:56:32 +0000 (0:00:00.172) 0:00:00.172 **********
2025-03-10 23:56:32.257474 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:56:32.257557 | orchestrator |
2025-03-10 23:56:32.259592 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-10 23:56:32.259715 | orchestrator | Monday 10 March 2025 23:56:32 +0000 (0:00:00.108) 0:00:00.281 **********
2025-03-10 23:56:33.265865 | orchestrator | changed: [testbed-node-0]
2025-03-10 23:56:33.266429 | orchestrator |
2025-03-10 23:56:33.266987 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-10 23:56:33.267618 | orchestrator | Monday 10 March 2025 23:56:33 +0000 (0:00:01.007) 0:00:01.288 **********
2025-03-10 23:56:33.382087 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:56:33.382258 | orchestrator |
2025-03-10 23:56:33.382957 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-10 23:56:33.383638 | orchestrator |
2025-03-10 23:56:33.383888 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-10 23:56:33.384388 | orchestrator | Monday 10 March 2025 23:56:33 +0000 (0:00:00.117) 0:00:01.406 **********
2025-03-10 23:56:33.490678 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:56:33.491679 | orchestrator |
2025-03-10 23:56:33.493239 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-10 23:56:33.494500 | orchestrator | Monday 10 March 2025 23:56:33 +0000 (0:00:00.109) 0:00:01.515 **********
2025-03-10 23:56:34.157715 | orchestrator | changed: [testbed-node-1]
2025-03-10 23:56:34.157968 | orchestrator |
2025-03-10 23:56:34.158003 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-10 23:56:34.159213 | orchestrator | Monday 10 March 2025 23:56:34 +0000 (0:00:00.667) 0:00:02.183 **********
2025-03-10 23:56:34.285518 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:56:34.286150 | orchestrator |
2025-03-10 23:56:34.286533 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-10 23:56:34.287085 | orchestrator |
2025-03-10 23:56:34.288453 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-03-10 23:56:34.288736 | orchestrator | Monday 10 March 2025 23:56:34 +0000 (0:00:00.245) 0:00:02.307 **********
2025-03-10 23:56:34.526911 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:56:34.527460 | orchestrator |
2025-03-10 23:56:34.528217 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-03-10 23:56:34.528847 | orchestrator | Monday 10 March 2025 23:56:34 +0000 (0:00:00.646) 0:00:02.553 **********
2025-03-10 23:56:35.175496 | orchestrator | changed: [testbed-node-2]
2025-03-10 23:56:35.176455 | orchestrator |
2025-03-10 23:56:35.176487 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-03-10 23:56:35.176709 | orchestrator | Monday 10 March 2025 23:56:35 +0000 (0:00:00.646) 0:00:03.199 **********
2025-03-10 23:56:35.299232 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:56:35.300469 | orchestrator |
2025-03-10 23:56:35.300777 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-03-10 23:56:35.302666 | orchestrator |
2025-03-10 23:56:35.302931 | orchestrator | TASK [Exit playbook, if
user did not mean to reboot systems] ******************* 2025-03-10 23:56:35.303896 | orchestrator | Monday 10 March 2025 23:56:35 +0000 (0:00:00.121) 0:00:03.321 ********** 2025-03-10 23:56:35.413035 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:56:35.413412 | orchestrator | 2025-03-10 23:56:35.414238 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-10 23:56:35.414521 | orchestrator | Monday 10 March 2025 23:56:35 +0000 (0:00:00.117) 0:00:03.438 ********** 2025-03-10 23:56:36.109235 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:56:36.109392 | orchestrator | 2025-03-10 23:56:36.109823 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-10 23:56:36.110385 | orchestrator | Monday 10 March 2025 23:56:36 +0000 (0:00:00.693) 0:00:04.132 ********** 2025-03-10 23:56:36.256687 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:56:36.260279 | orchestrator | 2025-03-10 23:56:36.260924 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-10 23:56:36.261585 | orchestrator | 2025-03-10 23:56:36.262525 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-10 23:56:36.262670 | orchestrator | Monday 10 March 2025 23:56:36 +0000 (0:00:00.145) 0:00:04.277 ********** 2025-03-10 23:56:36.368656 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:56:36.369596 | orchestrator | 2025-03-10 23:56:36.370197 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-10 23:56:36.371178 | orchestrator | Monday 10 March 2025 23:56:36 +0000 (0:00:00.116) 0:00:04.393 ********** 2025-03-10 23:56:37.033722 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:56:37.033837 | orchestrator | 2025-03-10 23:56:37.033854 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-03-10 23:56:37.034478 | orchestrator | Monday 10 March 2025 23:56:37 +0000 (0:00:00.664) 0:00:05.058 ********** 2025-03-10 23:56:37.147538 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:56:37.147613 | orchestrator | 2025-03-10 23:56:37.148496 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-10 23:56:37.148817 | orchestrator | 2025-03-10 23:56:37.150627 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-10 23:56:37.151686 | orchestrator | Monday 10 March 2025 23:56:37 +0000 (0:00:00.111) 0:00:05.170 ********** 2025-03-10 23:56:37.251050 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:56:37.251527 | orchestrator | 2025-03-10 23:56:37.251559 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-10 23:56:37.251581 | orchestrator | Monday 10 March 2025 23:56:37 +0000 (0:00:00.104) 0:00:05.275 ********** 2025-03-10 23:56:37.956281 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:56:37.956901 | orchestrator | 2025-03-10 23:56:37.957741 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-10 23:56:37.959840 | orchestrator | Monday 10 March 2025 23:56:37 +0000 (0:00:00.705) 0:00:05.980 ********** 2025-03-10 23:56:37.985320 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:56:37.985505 | orchestrator | 2025-03-10 23:56:37.985919 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:56:37.986298 | orchestrator | 2025-03-10 23:56:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-10 23:56:37.986431 | orchestrator | 2025-03-10 23:56:37 | INFO  | Please wait and do not abort execution. 
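The six plays above all follow the same two-phase pattern: the reboot is triggered without blocking on the result (the "wait for the reboot to complete" task is skipped), and reachability is verified later by a separate wait-for-connection run. A minimal shell sketch of the trigger phase — the `reboot_nodes` helper and its arguments are illustrative, not part of the testbed scripts:

```shell
# Fire-and-forget reboot: issue the reboot on each node and tolerate the
# dropped SSH link. Reachability is verified afterwards by a separate
# wait-for-connection play, not here.
reboot_nodes() {
    local node
    for node in "$@"; do
        # The SSH session dies as the host goes down, so ignore the exit code.
        ssh -o BatchMode=yes "$node" 'sudo shutdown -r now' || true
    done
}
```

Called as `reboot_nodes testbed-node-0 testbed-node-1 …`; the deliberate `|| true` is what lets the play report `changed` instead of failing when the connection drops mid-reboot.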
2025-03-10 23:56:37.987307 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-10 23:56:37.987578 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-10 23:56:37.988321 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-10 23:56:37.988612 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-10 23:56:37.989497 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-10 23:56:37.991238 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-10 23:56:37.992518 | orchestrator | 2025-03-10 23:56:37.993317 | orchestrator | 2025-03-10 23:56:37.993652 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-10 23:56:37.994488 | orchestrator | Monday 10 March 2025 23:56:37 +0000 (0:00:00.030) 0:00:06.011 ********** 2025-03-10 23:56:37.995021 | orchestrator | =============================================================================== 2025-03-10 23:56:37.995350 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.39s 2025-03-10 23:56:37.995541 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.80s 2025-03-10 23:56:37.995982 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2025-03-10 23:56:38.620595 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-03-10 23:56:40.313187 | orchestrator | 2025-03-10 23:56:40 | INFO  | Task e0b55dff-783a-486c-8b73-de3028b31e22 (wait-for-connection) was prepared for execution. 
2025-03-10 23:56:44.000494 | orchestrator | 2025-03-10 23:56:40 | INFO  | It takes a moment until task e0b55dff-783a-486c-8b73-de3028b31e22 (wait-for-connection) has been started and output is visible here. 2025-03-10 23:56:44.000631 | orchestrator | 2025-03-10 23:56:56.423025 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-03-10 23:56:56.423206 | orchestrator | 2025-03-10 23:56:56.423228 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-03-10 23:56:56.423243 | orchestrator | Monday 10 March 2025 23:56:43 +0000 (0:00:00.275) 0:00:00.275 ********** 2025-03-10 23:56:56.423274 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:56:56.424248 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:56:56.424306 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:56:56.425507 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:56:56.425758 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:56:56.426426 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:56:56.426463 | orchestrator | 2025-03-10 23:56:56.427211 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:56:56.427598 | orchestrator | 2025-03-10 23:56:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-10 23:56:56.428021 | orchestrator | 2025-03-10 23:56:56 | INFO  | Please wait and do not abort execution. 
2025-03-10 23:56:56.428053 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-10 23:56:56.428382 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-10 23:56:56.429059 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-10 23:56:56.429718 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-10 23:56:56.430108 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-10 23:56:56.430609 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-10 23:56:56.430986 | orchestrator | 2025-03-10 23:56:56.433592 | orchestrator | 2025-03-10 23:56:56.434244 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-10 23:56:56.434522 | orchestrator | Monday 10 March 2025 23:56:56 +0000 (0:00:12.420) 0:00:12.696 ********** 2025-03-10 23:56:56.435028 | orchestrator | =============================================================================== 2025-03-10 23:56:56.435789 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.42s 2025-03-10 23:56:57.149242 | orchestrator | + osism apply hddtemp 2025-03-10 23:56:58.900478 | orchestrator | 2025-03-10 23:56:58 | INFO  | Task 35152574-53d1-4259-8343-8aab100a9601 (hddtemp) was prepared for execution. 2025-03-10 23:57:02.642541 | orchestrator | 2025-03-10 23:56:58 | INFO  | It takes a moment until task 35152574-53d1-4259-8343-8aab100a9601 (hddtemp) has been started and output is visible here. 
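The `wait-for-connection` play above blocks for ~12 s until every rebooted node answers again. The underlying idea is a simple TCP poll against each host's SSH port; a sketch in shell — the function name, port, and timing defaults are assumptions, not the play's actual implementation:

```shell
# Poll a TCP port until it accepts a connection or the deadline passes.
# Uses bash's /dev/tcp pseudo-device; a firewalled (packet-dropping) port
# could stall a single attempt, so production code would wrap the probe
# in `timeout`.
wait_until_reachable() {
    local host=$1 port=${2:-22} timeout=${3:-300} interval=${4:-5}
    local deadline=$(( SECONDS + timeout ))
    while (( SECONDS < deadline )); do
        if (echo -n > "/dev/tcp/$host/$port") 2>/dev/null; then
            return 0    # port accepted the connection: host is back
        fi
        sleep "$interval"
    done
    return 1            # still unreachable when the deadline passed
}
```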
2025-03-10 23:57:02.642671 | orchestrator | 2025-03-10 23:57:02.643038 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-03-10 23:57:02.643638 | orchestrator | 2025-03-10 23:57:02.644669 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-03-10 23:57:02.649839 | orchestrator | Monday 10 March 2025 23:57:02 +0000 (0:00:00.238) 0:00:00.238 ********** 2025-03-10 23:57:02.821347 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:02.908295 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:57:02.997827 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:57:03.092303 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:57:03.200025 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:57:03.454282 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:57:03.455206 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:57:03.455759 | orchestrator | 2025-03-10 23:57:03.456659 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-03-10 23:57:03.457910 | orchestrator | Monday 10 March 2025 23:57:03 +0000 (0:00:00.813) 0:00:01.052 ********** 2025-03-10 23:57:04.813675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:57:04.814889 | orchestrator | 2025-03-10 23:57:04.814937 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-03-10 23:57:04.817043 | orchestrator | Monday 10 March 2025 23:57:04 +0000 (0:00:01.358) 0:00:02.410 ********** 2025-03-10 23:57:06.881319 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:06.881790 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:57:06.882149 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:57:06.882218 | 
orchestrator | ok: [testbed-node-2] 2025-03-10 23:57:06.887186 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:57:06.888258 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:57:06.888281 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:57:06.888294 | orchestrator | 2025-03-10 23:57:06.888312 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-03-10 23:57:06.888643 | orchestrator | Monday 10 March 2025 23:57:06 +0000 (0:00:02.070) 0:00:04.480 ********** 2025-03-10 23:57:07.629076 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:57:07.740027 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:57:08.219684 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:08.222377 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:57:08.222702 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:57:08.222992 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:57:08.225297 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:57:08.225555 | orchestrator | 2025-03-10 23:57:08.225863 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-03-10 23:57:08.226429 | orchestrator | Monday 10 March 2025 23:57:08 +0000 (0:00:01.334) 0:00:05.815 ********** 2025-03-10 23:57:09.469243 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:57:09.470543 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:57:09.470625 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:57:09.474181 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:57:10.003340 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:57:10.003454 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:10.003471 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:57:10.003517 | orchestrator | 2025-03-10 23:57:10.003534 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-03-10 23:57:10.003550 | orchestrator | Monday 10 March 2025 23:57:09 +0000 
(0:00:01.253) 0:00:07.069 ********** 2025-03-10 23:57:10.003580 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:57:10.120746 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:57:10.254871 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:10.349326 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:57:10.497101 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:57:10.497283 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:57:10.498354 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:57:10.499246 | orchestrator | 2025-03-10 23:57:10.499560 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-03-10 23:57:10.500401 | orchestrator | Monday 10 March 2025 23:57:10 +0000 (0:00:01.024) 0:00:08.093 ********** 2025-03-10 23:57:23.774141 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:23.776427 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:57:23.776495 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:57:23.776520 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:57:23.776575 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:57:23.776596 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:57:23.778936 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:57:23.779652 | orchestrator | 2025-03-10 23:57:23.779685 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-03-10 23:57:23.780063 | orchestrator | Monday 10 March 2025 23:57:23 +0000 (0:00:13.275) 0:00:21.368 ********** 2025-03-10 23:57:25.122724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-10 23:57:25.122996 | orchestrator | 2025-03-10 23:57:25.128434 | orchestrator | TASK [osism.services.hddtemp : 
Manage lm-sensors service] ********************** 2025-03-10 23:57:27.158979 | orchestrator | Monday 10 March 2025 23:57:25 +0000 (0:00:01.349) 0:00:22.717 ********** 2025-03-10 23:57:27.159149 | orchestrator | changed: [testbed-node-1] 2025-03-10 23:57:27.160619 | orchestrator | changed: [testbed-node-2] 2025-03-10 23:57:27.162714 | orchestrator | changed: [testbed-node-0] 2025-03-10 23:57:27.164363 | orchestrator | changed: [testbed-manager] 2025-03-10 23:57:27.164392 | orchestrator | changed: [testbed-node-3] 2025-03-10 23:57:27.165655 | orchestrator | changed: [testbed-node-4] 2025-03-10 23:57:27.166729 | orchestrator | changed: [testbed-node-5] 2025-03-10 23:57:27.167618 | orchestrator | 2025-03-10 23:57:27.168342 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-10 23:57:27.169322 | orchestrator | 2025-03-10 23:57:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-10 23:57:27.170129 | orchestrator | 2025-03-10 23:57:27 | INFO  | Please wait and do not abort execution. 
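The hddtemp role output above shows the sequence for the `drivetemp` kernel module: enable it at boot, check whether the running kernel provides it, and load it only where needed (hence `changed` on testbed-manager but `skipping` on the nodes for the load task). A sketch of that sequence as a shell function — the function name, the plain command names, and the file layout are assumptions, not the role's actual tasks:

```shell
# Enable a kernel module at boot, then load it immediately if the running
# kernel provides it and it is not already loaded. Illustrative sketch of
# the enable/check/load steps seen in the hddtemp role output above.
enable_and_load_module() {
    local module=$1 confdir=${2:-/etc/modules-load.d}
    echo "$module" > "$confdir/$module.conf"          # load on every boot
    if modinfo "$module" >/dev/null 2>&1; then        # available in this kernel?
        lsmod | grep -q "^$module " || modprobe "$module"
    fi
}
```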
2025-03-10 23:57:27.170168 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-10 23:57:27.171143 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:27.171678 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:27.172560 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:27.173493 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:27.174474 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:27.175241 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:27.175550 | orchestrator | 2025-03-10 23:57:27.176570 | orchestrator | 2025-03-10 23:57:27.176921 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-10 23:57:27.177519 | orchestrator | Monday 10 March 2025 23:57:27 +0000 (0:00:02.039) 0:00:24.757 ********** 2025-03-10 23:57:27.178146 | orchestrator | =============================================================================== 2025-03-10 23:57:27.178586 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.28s 2025-03-10 23:57:27.179186 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.07s 2025-03-10 23:57:27.179702 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.04s 2025-03-10 23:57:27.180558 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.36s 2025-03-10 23:57:27.180953 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.35s 
2025-03-10 23:57:27.181447 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.33s 2025-03-10 23:57:27.181790 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.25s 2025-03-10 23:57:27.182398 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 1.02s 2025-03-10 23:57:27.182750 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.81s 2025-03-10 23:57:27.889612 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-03-10 23:57:29.601042 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-10 23:57:29.601267 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-10 23:57:29.601311 | orchestrator | + local max_attempts=60 2025-03-10 23:57:29.601326 | orchestrator | + local name=ceph-ansible 2025-03-10 23:57:29.601341 | orchestrator | + local attempt_num=1 2025-03-10 23:57:29.601361 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-10 23:57:29.639129 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-10 23:57:29.639415 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-10 23:57:29.639442 | orchestrator | + local max_attempts=60 2025-03-10 23:57:29.639457 | orchestrator | + local name=kolla-ansible 2025-03-10 23:57:29.639473 | orchestrator | + local attempt_num=1 2025-03-10 23:57:29.639493 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-10 23:57:29.676647 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-10 23:57:29.677911 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-03-10 23:57:29.677937 | orchestrator | + local max_attempts=60 2025-03-10 23:57:29.677953 | orchestrator | + local name=osism-ansible 2025-03-10 23:57:29.677968 | orchestrator | + local attempt_num=1 2025-03-10 23:57:29.677988 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-10 23:57:29.711291 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-10 23:57:29.711450 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-10 23:57:29.711546 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-10 23:57:30.156141 | orchestrator | ARA in ceph-ansible already disabled. 2025-03-10 23:57:30.638475 | orchestrator | ARA in kolla-ansible already disabled. 2025-03-10 23:57:31.005445 | orchestrator | ARA in osism-ansible already disabled. 2025-03-10 23:57:31.383659 | orchestrator | ARA in osism-kubernetes already disabled. 2025-03-10 23:57:31.384314 | orchestrator | + osism apply gather-facts 2025-03-10 23:57:33.070628 | orchestrator | 2025-03-10 23:57:33 | INFO  | Task 19ecd604-c9de-471f-8ad0-f51eee5a427c (gather-facts) was prepared for execution. 2025-03-10 23:57:36.755894 | orchestrator | 2025-03-10 23:57:33 | INFO  | It takes a moment until task 19ecd604-c9de-471f-8ad0-f51eee5a427c (gather-facts) has been started and output is visible here. 
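The `+` xtrace lines above show each `wait_for_container_healthy 60 <name>` call succeeding on its first `docker inspect` probe. Reconstructed from that trace as a sketch — the retry delay is an assumption (the trace never reaches a retry), and plain `docker` is used here instead of the script's `/usr/bin/docker` so the probe is easy to stub:

```shell
# Poll a container's health status until Docker reports "healthy",
# giving up after max_attempts probes. Reconstructed from the xtrace
# above; the sleep between attempts is an assumption.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5     # assumed retry delay
    done
}
```

This only works for containers whose image defines a `HEALTHCHECK`; without one, `.State.Health` is absent and the probe never matches.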
2025-03-10 23:57:36.756036 | orchestrator | 2025-03-10 23:57:36.756617 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-10 23:57:36.756655 | orchestrator | 2025-03-10 23:57:36.757802 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-10 23:57:36.757994 | orchestrator | Monday 10 March 2025 23:57:36 +0000 (0:00:00.189) 0:00:00.189 ********** 2025-03-10 23:57:42.225164 | orchestrator | ok: [testbed-node-0] 2025-03-10 23:57:42.225765 | orchestrator | ok: [testbed-node-1] 2025-03-10 23:57:42.225802 | orchestrator | ok: [testbed-node-2] 2025-03-10 23:57:42.225826 | orchestrator | ok: [testbed-manager] 2025-03-10 23:57:42.226362 | orchestrator | ok: [testbed-node-4] 2025-03-10 23:57:42.227399 | orchestrator | ok: [testbed-node-3] 2025-03-10 23:57:42.228402 | orchestrator | ok: [testbed-node-5] 2025-03-10 23:57:42.229037 | orchestrator | 2025-03-10 23:57:42.229657 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-10 23:57:42.231128 | orchestrator | 2025-03-10 23:57:42.231420 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-10 23:57:42.231451 | orchestrator | Monday 10 March 2025 23:57:42 +0000 (0:00:05.472) 0:00:05.661 ********** 2025-03-10 23:57:42.405595 | orchestrator | skipping: [testbed-manager] 2025-03-10 23:57:42.511953 | orchestrator | skipping: [testbed-node-0] 2025-03-10 23:57:42.601983 | orchestrator | skipping: [testbed-node-1] 2025-03-10 23:57:42.691712 | orchestrator | skipping: [testbed-node-2] 2025-03-10 23:57:42.776707 | orchestrator | skipping: [testbed-node-3] 2025-03-10 23:57:42.813917 | orchestrator | skipping: [testbed-node-4] 2025-03-10 23:57:42.814993 | orchestrator | skipping: [testbed-node-5] 2025-03-10 23:57:42.816504 | orchestrator | 2025-03-10 23:57:42.817938 | orchestrator | PLAY RECAP 
********************************************************************* 2025-03-10 23:57:42.818175 | orchestrator | 2025-03-10 23:57:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-10 23:57:42.818621 | orchestrator | 2025-03-10 23:57:42 | INFO  | Please wait and do not abort execution. 2025-03-10 23:57:42.818655 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:42.818755 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:42.819558 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:42.820019 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:42.820058 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:42.820174 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:42.821262 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-10 23:57:42.821735 | orchestrator | 2025-03-10 23:57:42.822475 | orchestrator | 2025-03-10 23:57:42.822808 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-10 23:57:42.823236 | orchestrator | Monday 10 March 2025 23:57:42 +0000 (0:00:00.590) 0:00:06.251 ********** 2025-03-10 23:57:42.823748 | orchestrator | =============================================================================== 2025-03-10 23:57:42.824376 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.47s 2025-03-10 23:57:42.824686 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2025-03-10 23:57:43.548968 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-03-10 23:57:43.562406 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-03-10 23:57:43.576782 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-03-10 23:57:43.591938 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-03-10 23:57:43.611607 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-03-10 23:57:43.626587 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-03-10 23:57:43.639242 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-03-10 23:57:43.654214 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-03-10 23:57:43.670620 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-03-10 23:57:43.690396 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-03-10 23:57:43.703765 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-03-10 23:57:43.717631 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-03-10 23:57:43.733558 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-03-10 23:57:43.747897 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-03-10 23:57:43.761513 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-03-10 23:57:43.776493 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-03-10 23:57:43.793521 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-03-10 23:57:43.807491 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-03-10 23:57:43.823524 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-03-10 23:57:43.836461 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-03-10 23:57:43.850468 | orchestrator | + [[ false == \t\r\u\e ]] 2025-03-10 23:57:44.275215 | orchestrator | changed 2025-03-10 23:57:44.371427 | 2025-03-10 23:57:44.371549 | TASK [Deploy services] 2025-03-10 23:57:44.478670 | orchestrator | skipping: Conditional result was False 2025-03-10 23:57:44.501017 | 2025-03-10 23:57:44.501161 | TASK [Deploy in a nutshell] 2025-03-10 23:57:45.157122 | orchestrator | + set -e 2025-03-10 23:57:45.157247 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-10 23:57:45.157260 | orchestrator | ++ export INTERACTIVE=false 2025-03-10 23:57:45.157267 | orchestrator | ++ INTERACTIVE=false 2025-03-10 23:57:45.157288 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-10 23:57:45.157295 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-10 23:57:45.157300 | orchestrator | + source /opt/manager-vars.sh 2025-03-10 23:57:45.157309 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-10 23:57:45.157318 | orchestrator | ++ NUMBER_OF_NODES=6 
2025-03-10 23:57:45.157323 | orchestrator | ++ export CEPH_VERSION=quincy
2025-03-10 23:57:45.157328 | orchestrator | ++ CEPH_VERSION=quincy
2025-03-10 23:57:45.157333 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-03-10 23:57:45.157339 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-03-10 23:57:45.157344 | orchestrator | ++ export MANAGER_VERSION=latest
2025-03-10 23:57:45.157349 | orchestrator | ++ MANAGER_VERSION=latest
2025-03-10 23:57:45.157354 | orchestrator | ++ export OPENSTACK_VERSION=2024.1
2025-03-10 23:57:45.157359 | orchestrator | ++ OPENSTACK_VERSION=2024.1
2025-03-10 23:57:45.157364 | orchestrator | ++ export ARA=false
2025-03-10 23:57:45.157369 | orchestrator | ++ ARA=false
2025-03-10 23:57:45.157374 | orchestrator | ++ export TEMPEST=false
2025-03-10 23:57:45.157379 | orchestrator | ++ TEMPEST=false
2025-03-10 23:57:45.157384 | orchestrator | ++ export IS_ZUUL=true
2025-03-10 23:57:45.157388 | orchestrator | ++ IS_ZUUL=true
2025-03-10 23:57:45.157393 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2025-03-10 23:57:45.157399 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.35
2025-03-10 23:57:45.157408 | orchestrator |
2025-03-10 23:57:45.158500 | orchestrator | # PULL IMAGES
2025-03-10 23:57:45.158513 | orchestrator |
2025-03-10 23:57:45.158519 | orchestrator | ++ export EXTERNAL_API=false
2025-03-10 23:57:45.158525 | orchestrator | ++ EXTERNAL_API=false
2025-03-10 23:57:45.158531 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-03-10 23:57:45.158537 | orchestrator | ++ IMAGE_USER=ubuntu
2025-03-10 23:57:45.158547 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-03-10 23:57:45.158553 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-03-10 23:57:45.158560 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-03-10 23:57:45.158566 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-03-10 23:57:45.158571 | orchestrator | + echo
2025-03-10 23:57:45.158577 | orchestrator | + echo '# PULL IMAGES'
2025-03-10 23:57:45.158583 | orchestrator | + echo
2025-03-10 23:57:45.158592 | orchestrator | ++ semver latest 7.0.0
2025-03-10 23:57:45.203657 | orchestrator | + [[ -1 -ge 0 ]]
2025-03-10 23:57:46.893509 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-03-10 23:57:46.893606 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-03-10 23:57:46.893654 | orchestrator | 2025-03-10 23:57:46 | INFO  | Trying to run play pull-images in environment custom
2025-03-10 23:57:46.953869 | orchestrator | 2025-03-10 23:57:46 | INFO  | Task cd17fb85-e8fc-4cfb-9980-0f831bc9001a (pull-images) was prepared for execution.
2025-03-10 23:57:50.856821 | orchestrator | 2025-03-10 23:57:46 | INFO  | It takes a moment until task cd17fb85-e8fc-4cfb-9980-0f831bc9001a (pull-images) has been started and output is visible here.
2025-03-10 23:57:50.856956 | orchestrator |
2025-03-10 23:57:50.857830 | orchestrator | PLAY [Pull images] *************************************************************
2025-03-10 23:57:50.857873 | orchestrator |
2025-03-10 23:57:50.858577 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-03-10 23:57:50.859261 | orchestrator | Monday 10 March 2025 23:57:50 +0000 (0:00:00.180) 0:00:00.180 **********
2025-03-10 23:58:24.524423 | orchestrator | changed: [testbed-manager]
2025-03-10 23:59:25.538655 | orchestrator |
2025-03-10 23:59:25.538818 | orchestrator | TASK [Pull other images] *******************************************************
2025-03-10 23:59:25.538844 | orchestrator | Monday 10 March 2025 23:58:24 +0000 (0:00:33.665) 0:00:33.845 **********
2025-03-10 23:59:25.538878 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-03-10 23:59:25.543227 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-03-10 23:59:25.543306 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-03-10 23:59:25.543342 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-03-10 23:59:25.543381 | orchestrator | changed: [testbed-manager] => (item=common)
2025-03-10 23:59:25.543406 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-03-10 23:59:25.543434 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-03-10 23:59:25.543493 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-03-10 23:59:25.543519 | orchestrator | changed: [testbed-manager] => (item=heat)
2025-03-10 23:59:25.543549 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-03-10 23:59:25.543573 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-03-10 23:59:25.543596 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-03-10 23:59:25.543619 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-03-10 23:59:25.543642 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-03-10 23:59:25.543664 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-03-10 23:59:25.543688 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-03-10 23:59:25.543711 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-03-10 23:59:25.543735 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-03-10 23:59:25.543758 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-03-10 23:59:25.543799 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-03-10 23:59:25.544034 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-03-10 23:59:25.544250 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-03-10 23:59:25.544275 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-03-10 23:59:25.544291 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-03-10 23:59:25.544335 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-03-10 23:59:25.544365 | orchestrator |
2025-03-10 23:59:25.544525 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:59:25.546177 | orchestrator | 2025-03-10 23:59:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-10 23:59:25.546264 | orchestrator | 2025-03-10 23:59:25 | INFO  | Please wait and do not abort execution.
2025-03-10 23:59:25.546298 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-10 23:59:25.546378 | orchestrator |
2025-03-10 23:59:25.546959 | orchestrator |
2025-03-10 23:59:25.547691 | orchestrator | TASKS RECAP ********************************************************************
2025-03-10 23:59:25.548244 | orchestrator | Monday 10 March 2025 23:59:25 +0000 (0:01:01.008) 0:01:34.854 **********
2025-03-10 23:59:25.548488 | orchestrator | ===============================================================================
2025-03-10 23:59:25.548860 | orchestrator | Pull other images ------------------------------------------------------ 61.01s
2025-03-10 23:59:25.549191 | orchestrator | Pull keystone image ---------------------------------------------------- 33.67s
2025-03-10 23:59:27.894823 | orchestrator | 2025-03-10 23:59:27 | INFO  | Trying to run play wipe-partitions in environment custom
2025-03-10 23:59:27.946313 | orchestrator | 2025-03-10 23:59:27 | INFO  | Task 599107ef-a7de-491f-bb48-8adf8d21fbfe (wipe-partitions) was prepared for execution.
2025-03-10 23:59:31.803337 | orchestrator | 2025-03-10 23:59:27 | INFO  | It takes a moment until task 599107ef-a7de-491f-bb48-8adf8d21fbfe (wipe-partitions) has been started and output is visible here.
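The trace above runs `osism apply -r 2 -e custom pull-images`, presumably asking for up to two attempts at the play (assumption: `-r` is a retry count; check `osism apply --help`). A minimal, self-contained sketch of such retry semantics, with a hypothetical `flaky` command that fails once and then succeeds:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: retry a command up to N times, the way
# "osism apply -r 2 ... pull-images" presumably re-runs a failed play.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    echo "attempt $i/$attempts failed" >&2
  done
  return 1
}

# Usage example: a stand-in command that fails on its first call only.
state_file=$(mktemp)
flaky() {
  if [ -s "$state_file" ]; then return 0; fi
  echo x > "$state_file"; return 1
}
retry 2 flaky && echo "pull succeeded"   # first attempt fails, second passes
rm -f "$state_file"
```

Pulling all Kolla images up front like this means the later deploy steps are not gated on registry latency or transient network failures.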
2025-03-10 23:59:31.803479 | orchestrator |
2025-03-10 23:59:31.803786 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-03-10 23:59:31.803816 | orchestrator |
2025-03-10 23:59:31.803831 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-03-10 23:59:31.803852 | orchestrator | Monday 10 March 2025 23:59:31 +0000 (0:00:00.228) 0:00:00.228 **********
2025-03-10 23:59:32.637641 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:59:32.638136 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:59:32.639336 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:59:32.640289 | orchestrator |
2025-03-10 23:59:32.640321 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-03-10 23:59:32.640715 | orchestrator | Monday 10 March 2025 23:59:32 +0000 (0:00:00.837) 0:00:01.066 **********
2025-03-10 23:59:32.805196 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:59:32.918254 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:59:32.920580 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:59:32.922271 | orchestrator |
2025-03-10 23:59:32.923652 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-03-10 23:59:32.924816 | orchestrator | Monday 10 March 2025 23:59:32 +0000 (0:00:00.281) 0:00:01.347 **********
2025-03-10 23:59:33.728541 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:59:33.731970 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:59:33.732034 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:59:33.732057 | orchestrator |
2025-03-10 23:59:33.732310 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-03-10 23:59:33.732575 | orchestrator | Monday 10 March 2025 23:59:33 +0000 (0:00:00.808) 0:00:02.156 **********
2025-03-10 23:59:33.908449 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:59:34.012930 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:59:34.015621 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:59:34.016230 | orchestrator |
2025-03-10 23:59:34.016258 | orchestrator | TASK [Check device availability] ***********************************************
2025-03-10 23:59:34.016299 | orchestrator | Monday 10 March 2025 23:59:34 +0000 (0:00:00.288) 0:00:02.444 **********
2025-03-10 23:59:35.328846 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-03-10 23:59:35.329056 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-03-10 23:59:35.329272 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-03-10 23:59:35.329613 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-03-10 23:59:35.329875 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-03-10 23:59:35.330361 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-03-10 23:59:35.333376 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-03-10 23:59:35.333699 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-03-10 23:59:35.334897 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-03-10 23:59:35.335205 | orchestrator |
2025-03-10 23:59:35.335697 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-03-10 23:59:35.335729 | orchestrator | Monday 10 March 2025 23:59:35 +0000 (0:00:01.316) 0:00:03.760 **********
2025-03-10 23:59:36.839786 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-03-10 23:59:36.839968 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-03-10 23:59:36.840085 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-03-10 23:59:36.843081 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-03-10 23:59:36.844788 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-03-10 23:59:36.844910 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-03-10 23:59:36.845376 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-03-10 23:59:36.845596 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-03-10 23:59:36.848081 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-03-10 23:59:36.848544 | orchestrator |
2025-03-10 23:59:36.848674 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-03-10 23:59:36.848744 | orchestrator | Monday 10 March 2025 23:59:36 +0000 (0:00:01.507) 0:00:05.268 **********
2025-03-10 23:59:40.409754 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-03-10 23:59:40.409931 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-03-10 23:59:40.411123 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-03-10 23:59:40.411794 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-03-10 23:59:40.414061 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-03-10 23:59:40.414776 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-03-10 23:59:40.414803 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-03-10 23:59:40.414825 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-03-10 23:59:40.415217 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-03-10 23:59:40.415494 | orchestrator |
2025-03-10 23:59:40.416136 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-03-10 23:59:40.416903 | orchestrator | Monday 10 March 2025 23:59:40 +0000 (0:00:03.572) 0:00:08.840 **********
2025-03-10 23:59:41.069301 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:59:41.069657 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:59:41.073823 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:59:41.717354 | orchestrator |
2025-03-10 23:59:41.717465 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-03-10 23:59:41.717485 | orchestrator | Monday 10 March 2025 23:59:41 +0000 (0:00:00.655) 0:00:09.495 **********
2025-03-10 23:59:41.717516 | orchestrator | changed: [testbed-node-3]
2025-03-10 23:59:41.719077 | orchestrator | changed: [testbed-node-4]
2025-03-10 23:59:41.720344 | orchestrator | changed: [testbed-node-5]
2025-03-10 23:59:41.720374 | orchestrator |
2025-03-10 23:59:41.721233 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:59:41.721897 | orchestrator | 2025-03-10 23:59:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-10 23:59:41.723771 | orchestrator | 2025-03-10 23:59:41 | INFO  | Please wait and do not abort execution.
2025-03-10 23:59:41.723804 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:59:41.723865 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:59:41.725136 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:59:41.725576 | orchestrator |
2025-03-10 23:59:41.726106 | orchestrator |
2025-03-10 23:59:41.726679 | orchestrator | TASKS RECAP ********************************************************************
2025-03-10 23:59:41.726776 | orchestrator | Monday 10 March 2025 23:59:41 +0000 (0:00:00.650) 0:00:10.146 **********
2025-03-10 23:59:41.727327 | orchestrator | ===============================================================================
2025-03-10 23:59:41.727679 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.57s
2025-03-10 23:59:41.727845 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.51s
2025-03-10 23:59:41.728654 | orchestrator | Check device availability ----------------------------------------------- 1.32s
2025-03-10 23:59:41.729021 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.84s
2025-03-10 23:59:41.729119 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.81s
2025-03-10 23:59:41.729737 | orchestrator | Reload udev rules ------------------------------------------------------- 0.66s
2025-03-10 23:59:41.729932 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s
2025-03-10 23:59:41.731129 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s
2025-03-10 23:59:44.433045 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s
2025-03-10 23:59:44.433183 | orchestrator | 2025-03-10 23:59:44 | INFO  | Task ccc49ae8-f69d-4001-b162-2adbda632518 (facts) was prepared for execution.
2025-03-10 23:59:48.235411 | orchestrator | 2025-03-10 23:59:44 | INFO  | It takes a moment until task ccc49ae8-f69d-4001-b162-2adbda632518 (facts) has been started and output is visible here.
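The wipe-partitions play above runs three disk-cleaning steps per OSD device: drop signatures with wipefs, zero the first 32 MiB, then reload udev rules and re-trigger device events so the kernel re-reads the now-empty devices. A runnable sketch of that sequence, exercised here against a scratch image file rather than a real `/dev/sdX` (the commented-out commands would need root and a real block device):

```shell
#!/usr/bin/env bash
# Sketch of the wipe sequence from the play above, on a throwaway file.
set -eu

dev=$(mktemp)                      # stand-in for /dev/sdb, /dev/sdc, ...
dd if=/dev/urandom of="$dev" bs=1M count=40 status=none

# wipefs -a "$dev"                 # drop filesystem/partition signatures
dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none  # first 32M
# udevadm control --reload-rules   # "Reload udev rules"
# udevadm trigger                  # "Request device events from the kernel"

# Verify the first 32 MiB really are zeros now:
cmp -s <(head -c $((32*1024*1024)) "$dev") <(head -c $((32*1024*1024)) /dev/zero) \
  && echo "first 32M zeroed"
rm -f "$dev"
```

Zeroing the first 32 MiB in addition to wipefs clears leftover metadata (GPT headers, LVM labels, Ceph bluestore markers) that could otherwise confuse the subsequent OSD provisioning.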
2025-03-10 23:59:48.235586 | orchestrator |
2025-03-10 23:59:48.235673 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-03-10 23:59:48.236170 | orchestrator |
2025-03-10 23:59:48.236202 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-03-10 23:59:48.237120 | orchestrator | Monday 10 March 2025 23:59:48 +0000 (0:00:00.238) 0:00:00.238 **********
2025-03-10 23:59:49.494695 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:59:49.494913 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:59:49.496931 | orchestrator | ok: [testbed-manager]
2025-03-10 23:59:49.500665 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:59:49.501267 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:59:49.501296 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:59:49.502503 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:59:49.503655 | orchestrator |
2025-03-10 23:59:49.504703 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-03-10 23:59:49.505432 | orchestrator | Monday 10 March 2025 23:59:49 +0000 (0:00:01.257) 0:00:01.495 **********
2025-03-10 23:59:49.682847 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:59:49.802893 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:59:49.914770 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:59:50.015429 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:59:50.124938 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:59:50.952677 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:59:50.956100 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:59:50.956179 | orchestrator |
2025-03-10 23:59:50.957323 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-03-10 23:59:50.958946 | orchestrator |
2025-03-10 23:59:50.960785 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-10 23:59:50.962192 | orchestrator | Monday 10 March 2025 23:59:50 +0000 (0:00:01.460) 0:00:02.956 **********
2025-03-10 23:59:57.127854 | orchestrator | ok: [testbed-node-2]
2025-03-10 23:59:57.128124 | orchestrator | ok: [testbed-node-1]
2025-03-10 23:59:57.128918 | orchestrator | ok: [testbed-node-0]
2025-03-10 23:59:57.130213 | orchestrator | ok: [testbed-node-5]
2025-03-10 23:59:57.130673 | orchestrator | ok: [testbed-node-4]
2025-03-10 23:59:57.131149 | orchestrator | ok: [testbed-manager]
2025-03-10 23:59:57.131790 | orchestrator | ok: [testbed-node-3]
2025-03-10 23:59:57.132766 | orchestrator |
2025-03-10 23:59:57.133387 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-03-10 23:59:57.133570 | orchestrator |
2025-03-10 23:59:57.133673 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-03-10 23:59:57.134109 | orchestrator | Monday 10 March 2025 23:59:57 +0000 (0:00:06.177) 0:00:09.134 **********
2025-03-10 23:59:57.316547 | orchestrator | skipping: [testbed-manager]
2025-03-10 23:59:57.417249 | orchestrator | skipping: [testbed-node-0]
2025-03-10 23:59:57.499557 | orchestrator | skipping: [testbed-node-1]
2025-03-10 23:59:57.595916 | orchestrator | skipping: [testbed-node-2]
2025-03-10 23:59:57.684604 | orchestrator | skipping: [testbed-node-3]
2025-03-10 23:59:57.732066 | orchestrator | skipping: [testbed-node-4]
2025-03-10 23:59:57.732618 | orchestrator | skipping: [testbed-node-5]
2025-03-10 23:59:57.734069 | orchestrator |
2025-03-10 23:59:57.734739 | orchestrator | PLAY RECAP *********************************************************************
2025-03-10 23:59:57.735700 | orchestrator | 2025-03-10 23:59:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-10 23:59:57.736662 | orchestrator | 2025-03-10 23:59:57 | INFO  | Please wait and do not abort execution.
2025-03-10 23:59:57.736696 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:59:57.737758 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:59:57.740245 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:59:57.740596 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:59:57.740618 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:59:57.740659 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:59:57.740673 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-10 23:59:57.740690 | orchestrator |
2025-03-10 23:59:57.741249 | orchestrator |
2025-03-10 23:59:57.741473 | orchestrator | TASKS RECAP ********************************************************************
2025-03-10 23:59:57.742049 | orchestrator | Monday 10 March 2025 23:59:57 +0000 (0:00:00.603) 0:00:09.738 **********
2025-03-10 23:59:57.742490 | orchestrator | ===============================================================================
2025-03-10 23:59:57.742825 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.18s
2025-03-10 23:59:57.743571 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.46s
2025-03-10 23:59:57.744388 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.26s
2025-03-10 23:59:57.744416 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s
2025-03-11 00:00:00.579568 | orchestrator | 2025-03-11 00:00:00 | INFO  | Task a9725660-c571-4238-bf99-d45063b1eee6 (ceph-configure-lvm-volumes) was prepared for execution.
2025-03-11 00:00:00.582995 | orchestrator | 2025-03-11 00:00:00 | INFO  | It takes a moment until task a9725660-c571-4238-bf99-d45063b1eee6 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-03-11 00:00:05.747096 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-03-11 00:00:06.777820 | orchestrator |
2025-03-11 00:00:06.780365 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-03-11 00:00:06.781471 | orchestrator |
2025-03-11 00:00:06.783954 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-03-11 00:00:06.786550 | orchestrator | Tuesday 11 March 2025 00:00:06 +0000 (0:00:00.894) 0:00:00.894 *********
2025-03-11 00:00:07.168463 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-03-11 00:00:07.169274 | orchestrator |
2025-03-11 00:00:07.171261 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-03-11 00:00:07.172272 | orchestrator | Tuesday 11 March 2025 00:00:07 +0000 (0:00:00.393) 0:00:01.288 *********
2025-03-11 00:00:07.459968 | orchestrator | ok: [testbed-node-3]
2025-03-11 00:00:07.462906 | orchestrator |
2025-03-11 00:00:07.463086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:07.465131 | orchestrator | Tuesday 11 March 2025 00:00:07 +0000 (0:00:00.289) 0:00:01.577 *********
2025-03-11 00:00:08.219057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-03-11 00:00:08.219773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-03-11 00:00:08.220122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-03-11 00:00:08.221241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-03-11 00:00:08.225046 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-03-11 00:00:08.225408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-03-11 00:00:08.225435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-03-11 00:00:08.225449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-03-11 00:00:08.225464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-03-11 00:00:08.225483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-03-11 00:00:08.226131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-03-11 00:00:08.226376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-03-11 00:00:08.227050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-03-11 00:00:08.227872 | orchestrator |
2025-03-11 00:00:08.229186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:08.229823 | orchestrator | Tuesday 11 March 2025 00:00:08 +0000 (0:00:00.763) 0:00:02.340 *********
2025-03-11 00:00:08.436063 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:08.436251 | orchestrator |
2025-03-11 00:00:08.755630 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:08.755729 | orchestrator | Tuesday 11 March 2025 00:00:08 +0000 (0:00:00.213) 0:00:02.554 *********
2025-03-11 00:00:08.755759 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:08.756738 | orchestrator |
2025-03-11 00:00:08.757523 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:08.758586 | orchestrator | Tuesday 11 March 2025 00:00:08 +0000 (0:00:00.318) 0:00:02.872 *********
2025-03-11 00:00:08.943897 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:08.945410 | orchestrator |
2025-03-11 00:00:08.945557 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:08.945585 | orchestrator | Tuesday 11 March 2025 00:00:08 +0000 (0:00:00.193) 0:00:03.066 *********
2025-03-11 00:00:09.234880 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:09.237964 | orchestrator |
2025-03-11 00:00:09.238353 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:09.238877 | orchestrator | Tuesday 11 March 2025 00:00:09 +0000 (0:00:00.288) 0:00:03.355 *********
2025-03-11 00:00:09.506217 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:09.506626 | orchestrator |
2025-03-11 00:00:09.509752 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:09.510266 | orchestrator | Tuesday 11 March 2025 00:00:09 +0000 (0:00:00.271) 0:00:03.626 *********
2025-03-11 00:00:09.779961 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:09.780141 | orchestrator |
2025-03-11 00:00:09.780596 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:10.005368 | orchestrator | Tuesday 11 March 2025 00:00:09 +0000 (0:00:00.275) 0:00:03.902 *********
2025-03-11 00:00:10.005525 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:10.005600 | orchestrator |
2025-03-11 00:00:10.006802 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:10.007498 | orchestrator | Tuesday 11 March 2025 00:00:10 +0000 (0:00:00.223) 0:00:04.125 *********
2025-03-11 00:00:10.276785 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:10.279207 | orchestrator |
2025-03-11 00:00:10.280071 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:10.282109 | orchestrator | Tuesday 11 March 2025 00:00:10 +0000 (0:00:00.272) 0:00:04.398 *********
2025-03-11 00:00:11.361138 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d67a3c24-b729-45e0-8397-e020ae3d0e20)
2025-03-11 00:00:11.361318 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d67a3c24-b729-45e0-8397-e020ae3d0e20)
2025-03-11 00:00:11.361966 | orchestrator |
2025-03-11 00:00:11.362792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:11.363536 | orchestrator | Tuesday 11 March 2025 00:00:11 +0000 (0:00:01.081) 0:00:05.479 *********
2025-03-11 00:00:12.012160 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e87dbd84-35d6-4b7e-85dc-79bdef85b968)
2025-03-11 00:00:12.013386 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e87dbd84-35d6-4b7e-85dc-79bdef85b968)
2025-03-11 00:00:12.015478 | orchestrator |
2025-03-11 00:00:12.016122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:12.016715 | orchestrator | Tuesday 11 March 2025 00:00:12 +0000 (0:00:00.644) 0:00:06.123 *********
2025-03-11 00:00:12.761121 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8f8c648b-56e1-45e0-bc37-3aa283872edf)
2025-03-11 00:00:12.761283 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8f8c648b-56e1-45e0-bc37-3aa283872edf)
2025-03-11 00:00:12.764019 | orchestrator |
2025-03-11 00:00:12.764350 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:12.764730 | orchestrator | Tuesday 11 March 2025 00:00:12 +0000 (0:00:00.758) 0:00:06.881 *********
2025-03-11 00:00:13.321181 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3a54aab5-53b8-4264-89d4-baa19ed5d083)
2025-03-11 00:00:13.321379 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3a54aab5-53b8-4264-89d4-baa19ed5d083)
2025-03-11 00:00:13.321792 | orchestrator |
2025-03-11 00:00:13.322416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:13.324799 | orchestrator | Tuesday 11 March 2025 00:00:13 +0000 (0:00:00.558) 0:00:07.440 *********
2025-03-11 00:00:13.903741 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-03-11 00:00:13.905816 | orchestrator |
2025-03-11 00:00:14.446901 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:14.447059 | orchestrator | Tuesday 11 March 2025 00:00:13 +0000 (0:00:00.582) 0:00:08.023 *********
2025-03-11 00:00:14.447093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-03-11 00:00:14.447317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-03-11 00:00:14.447350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-03-11 00:00:14.448214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-03-11 00:00:14.449271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-03-11 00:00:14.450152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-03-11 00:00:14.452758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-03-11 00:00:14.455169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-03-11 00:00:14.455793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-03-11 00:00:14.456266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-03-11 00:00:14.457101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-03-11 00:00:14.457854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-03-11 00:00:14.458948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-03-11 00:00:14.459262 | orchestrator |
2025-03-11 00:00:14.459614 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:14.460319 | orchestrator | Tuesday 11 March 2025 00:00:14 +0000 (0:00:00.542) 0:00:08.565 *********
2025-03-11 00:00:14.672576 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:14.674400 | orchestrator |
2025-03-11 00:00:14.674990 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:14.679035 | orchestrator | Tuesday 11 March 2025 00:00:14 +0000 (0:00:00.226) 0:00:08.792 *********
2025-03-11 00:00:14.933316 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:14.935635 | orchestrator |
2025-03-11 00:00:15.220082 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:15.220198 | orchestrator | Tuesday 11 March 2025 00:00:14 +0000 (0:00:00.262) 0:00:09.054 *********
2025-03-11 00:00:15.220227 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:15.221871 | orchestrator |
2025-03-11 00:00:15.222645 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:15.223098 | orchestrator | Tuesday 11 March 2025 00:00:15 +0000 (0:00:00.284) 0:00:09.339 *********
2025-03-11 00:00:15.931340 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:15.933273 | orchestrator |
2025-03-11 00:00:15.935306 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:15.936163 | orchestrator | Tuesday 11 March 2025 00:00:15 +0000 (0:00:00.711) 0:00:10.050 *********
2025-03-11 00:00:16.172080 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:16.172863 | orchestrator |
2025-03-11 00:00:16.173944 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:16.175082 | orchestrator | Tuesday 11 March 2025 00:00:16 +0000 (0:00:00.242) 0:00:10.292 *********
2025-03-11 00:00:16.479673 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:16.480787 | orchestrator |
2025-03-11 00:00:16.483122 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:16.483823 | orchestrator | Tuesday 11 March 2025 00:00:16 +0000 (0:00:00.306) 0:00:10.598 *********
2025-03-11 00:00:16.798167 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:16.799243 | orchestrator |
2025-03-11 00:00:16.801154 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:16.802314 | orchestrator | Tuesday 11 March 2025 00:00:16 +0000 (0:00:00.316) 0:00:10.914 *********
2025-03-11 00:00:17.050720 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:17.051662 | orchestrator |
2025-03-11 00:00:17.052718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:17.053117 | orchestrator | Tuesday 11 March 2025 00:00:17 +0000 (0:00:00.256) 0:00:11.171 *********
2025-03-11 00:00:18.067281 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-03-11 00:00:18.067416 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-03-11 00:00:18.067441 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-03-11 00:00:18.067760 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-03-11 00:00:18.068368 | orchestrator |
2025-03-11 00:00:18.068910 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:18.069715 | orchestrator | Tuesday 11 March 2025 00:00:18 +0000 (0:00:01.012) 0:00:12.184 *********
2025-03-11 00:00:18.502533 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:18.502762 | orchestrator |
2025-03-11 00:00:18.502799 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:18.503521 | orchestrator | Tuesday 11 March 2025 00:00:18 +0000 (0:00:00.437) 0:00:12.621 *********
2025-03-11 00:00:18.774718 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:18.775296 | orchestrator |
2025-03-11 00:00:18.775462 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:18.775493 | orchestrator | Tuesday 11 March 2025 00:00:18 +0000 (0:00:00.273) 0:00:12.895 *********
2025-03-11 00:00:19.088478 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:19.090120 | orchestrator |
2025-03-11 00:00:19.090692 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:19.091059 | orchestrator | Tuesday 11 March 2025 00:00:19 +0000 (0:00:00.314) 0:00:13.209 *********
2025-03-11 00:00:19.401116 | orchestrator | skipping: [testbed-node-3]
2025-03-11 00:00:19.402514 | orchestrator |
2025-03-11 00:00:19.402625 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-03-11 00:00:19.402649 | orchestrator | Tuesday 11 March 2025 00:00:19 +0000 (0:00:00.307) 0:00:13.517 *********
2025-03-11
00:00:19.714381 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-03-11 00:00:19.718541 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-03-11 00:00:19.719161 | orchestrator | 2025-03-11 00:00:19.720124 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-11 00:00:19.720416 | orchestrator | Tuesday 11 March 2025 00:00:19 +0000 (0:00:00.312) 0:00:13.830 ********* 2025-03-11 00:00:20.278250 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:20.280147 | orchestrator | 2025-03-11 00:00:20.442897 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-11 00:00:20.443019 | orchestrator | Tuesday 11 March 2025 00:00:20 +0000 (0:00:00.568) 0:00:14.398 ********* 2025-03-11 00:00:20.443042 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:20.443896 | orchestrator | 2025-03-11 00:00:20.445579 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-11 00:00:20.446951 | orchestrator | Tuesday 11 March 2025 00:00:20 +0000 (0:00:00.162) 0:00:14.561 ********* 2025-03-11 00:00:20.680613 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:20.681791 | orchestrator | 2025-03-11 00:00:20.683096 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-11 00:00:20.683629 | orchestrator | Tuesday 11 March 2025 00:00:20 +0000 (0:00:00.237) 0:00:14.798 ********* 2025-03-11 00:00:20.888180 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:00:20.890435 | orchestrator | 2025-03-11 00:00:20.891401 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-11 00:00:20.894493 | orchestrator | Tuesday 11 March 2025 00:00:20 +0000 (0:00:00.202) 0:00:15.001 ********* 2025-03-11 00:00:21.163714 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 
'value': {'osd_lvm_uuid': 'f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'}}) 2025-03-11 00:00:21.164039 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25356c17-92f4-5cd4-84bc-9f6437381575'}}) 2025-03-11 00:00:21.164090 | orchestrator | 2025-03-11 00:00:21.168675 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-11 00:00:21.170651 | orchestrator | Tuesday 11 March 2025 00:00:21 +0000 (0:00:00.276) 0:00:15.277 ********* 2025-03-11 00:00:21.385917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'}})  2025-03-11 00:00:21.387590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25356c17-92f4-5cd4-84bc-9f6437381575'}})  2025-03-11 00:00:21.391225 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:21.662761 | orchestrator | 2025-03-11 00:00:21.662865 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-11 00:00:21.662884 | orchestrator | Tuesday 11 March 2025 00:00:21 +0000 (0:00:00.228) 0:00:15.506 ********* 2025-03-11 00:00:21.662915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'}})  2025-03-11 00:00:21.663800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25356c17-92f4-5cd4-84bc-9f6437381575'}})  2025-03-11 00:00:21.664807 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:21.664899 | orchestrator | 2025-03-11 00:00:21.664924 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-11 00:00:21.665452 | orchestrator | Tuesday 11 March 2025 00:00:21 +0000 (0:00:00.277) 0:00:15.783 ********* 2025-03-11 00:00:21.897518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': 
{'osd_lvm_uuid': 'f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'}})  2025-03-11 00:00:21.897859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25356c17-92f4-5cd4-84bc-9f6437381575'}})  2025-03-11 00:00:21.898633 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:21.899681 | orchestrator | 2025-03-11 00:00:21.900365 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-11 00:00:21.900857 | orchestrator | Tuesday 11 March 2025 00:00:21 +0000 (0:00:00.234) 0:00:16.018 ********* 2025-03-11 00:00:22.064513 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:00:22.064694 | orchestrator | 2025-03-11 00:00:22.065806 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-11 00:00:22.066683 | orchestrator | Tuesday 11 March 2025 00:00:22 +0000 (0:00:00.167) 0:00:16.185 ********* 2025-03-11 00:00:22.222575 | orchestrator | ok: [testbed-node-3] 2025-03-11 00:00:22.223266 | orchestrator | 2025-03-11 00:00:22.224669 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-11 00:00:22.414767 | orchestrator | Tuesday 11 March 2025 00:00:22 +0000 (0:00:00.158) 0:00:16.344 ********* 2025-03-11 00:00:22.414889 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:22.416110 | orchestrator | 2025-03-11 00:00:22.416145 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-11 00:00:22.416892 | orchestrator | Tuesday 11 March 2025 00:00:22 +0000 (0:00:00.191) 0:00:16.535 ********* 2025-03-11 00:00:22.706540 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:22.706707 | orchestrator | 2025-03-11 00:00:22.707044 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-11 00:00:22.707483 | orchestrator | Tuesday 11 March 2025 00:00:22 +0000 (0:00:00.292) 0:00:16.828 ********* 
2025-03-11 00:00:22.876299 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:22.877565 | orchestrator | 2025-03-11 00:00:22.878702 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-11 00:00:22.879409 | orchestrator | Tuesday 11 March 2025 00:00:22 +0000 (0:00:00.168) 0:00:16.996 ********* 2025-03-11 00:00:23.033818 | orchestrator | ok: [testbed-node-3] => { 2025-03-11 00:00:23.034457 | orchestrator |  "ceph_osd_devices": { 2025-03-11 00:00:23.035263 | orchestrator |  "sdb": { 2025-03-11 00:00:23.040887 | orchestrator |  "osd_lvm_uuid": "f72f4ade-bca7-59b7-8aa7-c340bc3ca60b" 2025-03-11 00:00:23.041438 | orchestrator |  }, 2025-03-11 00:00:23.041873 | orchestrator |  "sdc": { 2025-03-11 00:00:23.042563 | orchestrator |  "osd_lvm_uuid": "25356c17-92f4-5cd4-84bc-9f6437381575" 2025-03-11 00:00:23.042927 | orchestrator |  } 2025-03-11 00:00:23.043428 | orchestrator |  } 2025-03-11 00:00:23.043808 | orchestrator | } 2025-03-11 00:00:23.044228 | orchestrator | 2025-03-11 00:00:23.044720 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-11 00:00:23.045087 | orchestrator | Tuesday 11 March 2025 00:00:23 +0000 (0:00:00.158) 0:00:17.155 ********* 2025-03-11 00:00:23.163209 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:23.163505 | orchestrator | 2025-03-11 00:00:23.164191 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-11 00:00:23.164498 | orchestrator | Tuesday 11 March 2025 00:00:23 +0000 (0:00:00.129) 0:00:17.284 ********* 2025-03-11 00:00:23.299018 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:23.299820 | orchestrator | 2025-03-11 00:00:23.474490 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-11 00:00:23.474567 | orchestrator | Tuesday 11 March 2025 00:00:23 +0000 (0:00:00.135) 0:00:17.420 ********* 
2025-03-11 00:00:23.474596 | orchestrator | skipping: [testbed-node-3] 2025-03-11 00:00:23.475693 | orchestrator | 2025-03-11 00:00:23.476597 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-11 00:00:23.476705 | orchestrator | Tuesday 11 March 2025 00:00:23 +0000 (0:00:00.174) 0:00:17.594 ********* 2025-03-11 00:00:23.785284 | orchestrator | changed: [testbed-node-3] => { 2025-03-11 00:00:23.785519 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-11 00:00:23.786307 | orchestrator |  "ceph_osd_devices": { 2025-03-11 00:00:23.787128 | orchestrator |  "sdb": { 2025-03-11 00:00:23.787478 | orchestrator |  "osd_lvm_uuid": "f72f4ade-bca7-59b7-8aa7-c340bc3ca60b" 2025-03-11 00:00:23.788096 | orchestrator |  }, 2025-03-11 00:00:23.788349 | orchestrator |  "sdc": { 2025-03-11 00:00:23.788686 | orchestrator |  "osd_lvm_uuid": "25356c17-92f4-5cd4-84bc-9f6437381575" 2025-03-11 00:00:23.789357 | orchestrator |  } 2025-03-11 00:00:23.789593 | orchestrator |  }, 2025-03-11 00:00:23.790200 | orchestrator |  "lvm_volumes": [ 2025-03-11 00:00:23.790464 | orchestrator |  { 2025-03-11 00:00:23.790768 | orchestrator |  "data": "osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b", 2025-03-11 00:00:23.791074 | orchestrator |  "data_vg": "ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b" 2025-03-11 00:00:23.791402 | orchestrator |  }, 2025-03-11 00:00:23.791641 | orchestrator |  { 2025-03-11 00:00:23.792130 | orchestrator |  "data": "osd-block-25356c17-92f4-5cd4-84bc-9f6437381575", 2025-03-11 00:00:23.792219 | orchestrator |  "data_vg": "ceph-25356c17-92f4-5cd4-84bc-9f6437381575" 2025-03-11 00:00:23.792601 | orchestrator |  } 2025-03-11 00:00:23.792903 | orchestrator |  ] 2025-03-11 00:00:23.796487 | orchestrator |  } 2025-03-11 00:00:25.969137 | orchestrator | } 2025-03-11 00:00:25.969255 | orchestrator | 2025-03-11 00:00:25.969274 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 
2025-03-11 00:00:25.969291 | orchestrator | Tuesday 11 March 2025 00:00:23 +0000 (0:00:00.308) 0:00:17.903 ********* 2025-03-11 00:00:25.969323 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-11 00:00:25.969388 | orchestrator | 2025-03-11 00:00:25.969406 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-11 00:00:25.969421 | orchestrator | 2025-03-11 00:00:25.969435 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-11 00:00:25.969453 | orchestrator | Tuesday 11 March 2025 00:00:25 +0000 (0:00:02.184) 0:00:20.088 ********* 2025-03-11 00:00:26.213798 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-11 00:00:26.214940 | orchestrator | 2025-03-11 00:00:26.215088 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-11 00:00:26.215116 | orchestrator | Tuesday 11 March 2025 00:00:26 +0000 (0:00:00.247) 0:00:20.335 ********* 2025-03-11 00:00:26.423670 | orchestrator | ok: [testbed-node-4] 2025-03-11 00:00:26.423815 | orchestrator | 2025-03-11 00:00:26.424104 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:26.425293 | orchestrator | Tuesday 11 March 2025 00:00:26 +0000 (0:00:00.209) 0:00:20.545 ********* 2025-03-11 00:00:26.788803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-11 00:00:26.789780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-11 00:00:26.791283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-11 00:00:26.792100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-11 00:00:26.792704 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-11 00:00:26.793627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-11 00:00:26.794995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-11 00:00:26.795876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-11 00:00:26.796338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-11 00:00:26.797265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-11 00:00:26.797777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-11 00:00:26.798654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-11 00:00:26.800328 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-11 00:00:26.800759 | orchestrator | 2025-03-11 00:00:26.800817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:26.800880 | orchestrator | Tuesday 11 March 2025 00:00:26 +0000 (0:00:00.362) 0:00:20.907 ********* 2025-03-11 00:00:26.993331 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:26.994523 | orchestrator | 2025-03-11 00:00:26.994561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:26.994850 | orchestrator | Tuesday 11 March 2025 00:00:26 +0000 (0:00:00.204) 0:00:21.111 ********* 2025-03-11 00:00:27.174404 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:27.177495 | orchestrator | 2025-03-11 00:00:27.650825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:27.650926 | orchestrator | 
Tuesday 11 March 2025 00:00:27 +0000 (0:00:00.183) 0:00:21.295 ********* 2025-03-11 00:00:27.651012 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:27.651077 | orchestrator | 2025-03-11 00:00:27.651721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:27.652151 | orchestrator | Tuesday 11 March 2025 00:00:27 +0000 (0:00:00.476) 0:00:21.772 ********* 2025-03-11 00:00:27.859405 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:27.860807 | orchestrator | 2025-03-11 00:00:27.864892 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:27.866148 | orchestrator | Tuesday 11 March 2025 00:00:27 +0000 (0:00:00.207) 0:00:21.980 ********* 2025-03-11 00:00:28.065609 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:28.066322 | orchestrator | 2025-03-11 00:00:28.067841 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:28.070205 | orchestrator | Tuesday 11 March 2025 00:00:28 +0000 (0:00:00.203) 0:00:22.183 ********* 2025-03-11 00:00:28.295509 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:28.296444 | orchestrator | 2025-03-11 00:00:28.296482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:28.528369 | orchestrator | Tuesday 11 March 2025 00:00:28 +0000 (0:00:00.231) 0:00:22.415 ********* 2025-03-11 00:00:28.528497 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:28.528559 | orchestrator | 2025-03-11 00:00:28.528580 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:28.528801 | orchestrator | Tuesday 11 March 2025 00:00:28 +0000 (0:00:00.233) 0:00:22.649 ********* 2025-03-11 00:00:28.760381 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:28.760822 | orchestrator | 2025-03-11 00:00:28.761505 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:28.761877 | orchestrator | Tuesday 11 March 2025 00:00:28 +0000 (0:00:00.231) 0:00:22.880 ********* 2025-03-11 00:00:29.312856 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2c66825c-4af4-4039-9be2-0884ea12c780) 2025-03-11 00:00:29.313717 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2c66825c-4af4-4039-9be2-0884ea12c780) 2025-03-11 00:00:29.314654 | orchestrator | 2025-03-11 00:00:29.315588 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:29.316063 | orchestrator | Tuesday 11 March 2025 00:00:29 +0000 (0:00:00.539) 0:00:23.420 ********* 2025-03-11 00:00:29.781443 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_663fc4ce-d59c-4a76-8f0a-41179b606a99) 2025-03-11 00:00:30.332391 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_663fc4ce-d59c-4a76-8f0a-41179b606a99) 2025-03-11 00:00:30.332496 | orchestrator | 2025-03-11 00:00:30.332514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:30.332530 | orchestrator | Tuesday 11 March 2025 00:00:29 +0000 (0:00:00.479) 0:00:23.900 ********* 2025-03-11 00:00:30.332559 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_96f3a3bc-1bc2-4311-aa73-ad4d834104c1) 2025-03-11 00:00:30.333112 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_96f3a3bc-1bc2-4311-aa73-ad4d834104c1) 2025-03-11 00:00:30.334541 | orchestrator | 2025-03-11 00:00:30.334581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:30.335032 | orchestrator | Tuesday 11 March 2025 00:00:30 +0000 (0:00:00.551) 0:00:24.451 ********* 2025-03-11 00:00:31.121352 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_35890ba3-27d1-4ca1-853f-43468bc69b0e) 2025-03-11 00:00:31.121515 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_35890ba3-27d1-4ca1-853f-43468bc69b0e) 2025-03-11 00:00:31.122564 | orchestrator | 2025-03-11 00:00:31.123468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 00:00:31.124302 | orchestrator | Tuesday 11 March 2025 00:00:31 +0000 (0:00:00.789) 0:00:25.240 ********* 2025-03-11 00:00:31.962839 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-11 00:00:31.963359 | orchestrator | 2025-03-11 00:00:31.963408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:31.964109 | orchestrator | Tuesday 11 March 2025 00:00:31 +0000 (0:00:00.841) 0:00:26.082 ********* 2025-03-11 00:00:32.431049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-03-11 00:00:32.432085 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-03-11 00:00:32.436421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-03-11 00:00:32.440761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-03-11 00:00:32.441067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-03-11 00:00:32.441093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-03-11 00:00:32.441108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-03-11 00:00:32.441123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-03-11 00:00:32.441137 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-03-11 00:00:32.441151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-03-11 00:00:32.441165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-03-11 00:00:32.441184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-03-11 00:00:32.441640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-03-11 00:00:32.442612 | orchestrator | 2025-03-11 00:00:32.443152 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:32.443207 | orchestrator | Tuesday 11 March 2025 00:00:32 +0000 (0:00:00.467) 0:00:26.549 ********* 2025-03-11 00:00:32.661614 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:32.663666 | orchestrator | 2025-03-11 00:00:32.663896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:32.664486 | orchestrator | Tuesday 11 March 2025 00:00:32 +0000 (0:00:00.231) 0:00:26.781 ********* 2025-03-11 00:00:32.876194 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:32.876429 | orchestrator | 2025-03-11 00:00:32.877376 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:32.878144 | orchestrator | Tuesday 11 March 2025 00:00:32 +0000 (0:00:00.214) 0:00:26.995 ********* 2025-03-11 00:00:33.104264 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:33.106123 | orchestrator | 2025-03-11 00:00:33.106321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:33.107636 | orchestrator | Tuesday 11 March 2025 00:00:33 +0000 (0:00:00.226) 0:00:27.222 ********* 2025-03-11 00:00:33.342855 | orchestrator | 
skipping: [testbed-node-4] 2025-03-11 00:00:33.344241 | orchestrator | 2025-03-11 00:00:33.348916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:33.349163 | orchestrator | Tuesday 11 March 2025 00:00:33 +0000 (0:00:00.241) 0:00:27.463 ********* 2025-03-11 00:00:33.554826 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:33.556226 | orchestrator | 2025-03-11 00:00:33.558792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:33.560004 | orchestrator | Tuesday 11 March 2025 00:00:33 +0000 (0:00:00.211) 0:00:27.675 ********* 2025-03-11 00:00:33.770199 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:33.770523 | orchestrator | 2025-03-11 00:00:33.770873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:33.771268 | orchestrator | Tuesday 11 March 2025 00:00:33 +0000 (0:00:00.215) 0:00:27.891 ********* 2025-03-11 00:00:33.984294 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:33.986090 | orchestrator | 2025-03-11 00:00:33.986381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:33.987677 | orchestrator | Tuesday 11 March 2025 00:00:33 +0000 (0:00:00.212) 0:00:28.103 ********* 2025-03-11 00:00:34.210607 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:34.211014 | orchestrator | 2025-03-11 00:00:34.211050 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:34.211074 | orchestrator | Tuesday 11 March 2025 00:00:34 +0000 (0:00:00.225) 0:00:28.329 ********* 2025-03-11 00:00:35.165629 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-03-11 00:00:35.166479 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-03-11 00:00:35.167610 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-03-11 
00:00:35.169509 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-03-11 00:00:35.170249 | orchestrator | 2025-03-11 00:00:35.170918 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:35.171808 | orchestrator | Tuesday 11 March 2025 00:00:35 +0000 (0:00:00.955) 0:00:29.285 ********* 2025-03-11 00:00:35.409224 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:35.409314 | orchestrator | 2025-03-11 00:00:35.410314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:35.410701 | orchestrator | Tuesday 11 March 2025 00:00:35 +0000 (0:00:00.243) 0:00:29.528 ********* 2025-03-11 00:00:35.615681 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:35.616141 | orchestrator | 2025-03-11 00:00:35.616782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:35.616816 | orchestrator | Tuesday 11 March 2025 00:00:35 +0000 (0:00:00.206) 0:00:29.735 ********* 2025-03-11 00:00:35.827608 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:35.827735 | orchestrator | 2025-03-11 00:00:35.827795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:35.828676 | orchestrator | Tuesday 11 March 2025 00:00:35 +0000 (0:00:00.212) 0:00:29.948 ********* 2025-03-11 00:00:36.060368 | orchestrator | skipping: [testbed-node-4] 2025-03-11 00:00:36.061596 | orchestrator | 2025-03-11 00:00:36.062883 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-11 00:00:36.063822 | orchestrator | Tuesday 11 March 2025 00:00:36 +0000 (0:00:00.232) 0:00:30.181 ********* 2025-03-11 00:00:36.289116 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-03-11 00:00:36.290205 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 
2025-03-11 00:00:36.290274 | orchestrator |
2025-03-11 00:00:36.291784 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-03-11 00:00:36.293834 | orchestrator | Tuesday 11 March 2025 00:00:36 +0000 (0:00:00.227) 0:00:30.408 *********
2025-03-11 00:00:36.434246 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:36.434404 | orchestrator |
2025-03-11 00:00:36.434430 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-03-11 00:00:36.434742 | orchestrator | Tuesday 11 March 2025 00:00:36 +0000 (0:00:00.146) 0:00:30.555 *********
2025-03-11 00:00:36.583054 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:36.583167 | orchestrator |
2025-03-11 00:00:36.583779 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-03-11 00:00:36.584406 | orchestrator | Tuesday 11 March 2025 00:00:36 +0000 (0:00:00.149) 0:00:30.704 *********
2025-03-11 00:00:36.729805 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:36.731598 | orchestrator |
2025-03-11 00:00:36.731807 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-03-11 00:00:36.734910 | orchestrator | Tuesday 11 March 2025 00:00:36 +0000 (0:00:00.145) 0:00:30.849 *********
2025-03-11 00:00:36.866244 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:00:36.867327 | orchestrator |
2025-03-11 00:00:36.869838 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-03-11 00:00:36.870152 | orchestrator | Tuesday 11 March 2025 00:00:36 +0000 (0:00:00.135) 0:00:30.985 *********
2025-03-11 00:00:37.062501 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'afd03ade-fddc-513b-974e-73ae3400739d'}})
2025-03-11 00:00:37.063364 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2c164e24-0081-5461-8b83-1ef82bb0535c'}})
2025-03-11 00:00:37.064667 | orchestrator |
2025-03-11 00:00:37.065135 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-03-11 00:00:37.065625 | orchestrator | Tuesday 11 March 2025 00:00:37 +0000 (0:00:00.195) 0:00:31.181 *********
2025-03-11 00:00:37.471560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'afd03ade-fddc-513b-974e-73ae3400739d'}})
2025-03-11 00:00:37.471761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2c164e24-0081-5461-8b83-1ef82bb0535c'}})
2025-03-11 00:00:37.472108 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:37.472145 | orchestrator |
2025-03-11 00:00:37.472381 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-03-11 00:00:37.473938 | orchestrator | Tuesday 11 March 2025 00:00:37 +0000 (0:00:00.407) 0:00:31.589 *********
2025-03-11 00:00:37.657854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'afd03ade-fddc-513b-974e-73ae3400739d'}})
2025-03-11 00:00:37.658576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2c164e24-0081-5461-8b83-1ef82bb0535c'}})
2025-03-11 00:00:37.659315 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:37.659367 | orchestrator |
2025-03-11 00:00:37.659390 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-03-11 00:00:37.659524 | orchestrator | Tuesday 11 March 2025 00:00:37 +0000 (0:00:00.189) 0:00:31.778 *********
2025-03-11 00:00:37.840786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'afd03ade-fddc-513b-974e-73ae3400739d'}})
2025-03-11 00:00:37.840955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2c164e24-0081-5461-8b83-1ef82bb0535c'}})
2025-03-11 00:00:37.841027 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:37.841461 | orchestrator |
2025-03-11 00:00:37.841705 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-03-11 00:00:37.842114 | orchestrator | Tuesday 11 March 2025 00:00:37 +0000 (0:00:00.183) 0:00:31.961 *********
2025-03-11 00:00:37.987416 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:00:37.988144 | orchestrator |
2025-03-11 00:00:37.988509 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-03-11 00:00:37.988543 | orchestrator | Tuesday 11 March 2025 00:00:37 +0000 (0:00:00.146) 0:00:32.107 *********
2025-03-11 00:00:38.144466 | orchestrator | ok: [testbed-node-4]
2025-03-11 00:00:38.144942 | orchestrator |
2025-03-11 00:00:38.145197 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-03-11 00:00:38.145536 | orchestrator | Tuesday 11 March 2025 00:00:38 +0000 (0:00:00.157) 0:00:32.265 *********
2025-03-11 00:00:38.304603 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:38.305671 | orchestrator |
2025-03-11 00:00:38.306350 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-03-11 00:00:38.306624 | orchestrator | Tuesday 11 March 2025 00:00:38 +0000 (0:00:00.159) 0:00:32.425 *********
2025-03-11 00:00:38.476627 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:38.476771 | orchestrator |
2025-03-11 00:00:38.476857 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-03-11 00:00:38.476881 | orchestrator | Tuesday 11 March 2025 00:00:38 +0000 (0:00:00.171) 0:00:32.597 *********
2025-03-11 00:00:38.625311 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:38.626322 | orchestrator |
2025-03-11 00:00:38.627028 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-03-11 00:00:38.627255 | orchestrator | Tuesday 11 March 2025 00:00:38 +0000 (0:00:00.148) 0:00:32.745 *********
2025-03-11 00:00:38.813672 | orchestrator | ok: [testbed-node-4] => {
2025-03-11 00:00:38.814410 | orchestrator |     "ceph_osd_devices": {
2025-03-11 00:00:38.815128 | orchestrator |         "sdb": {
2025-03-11 00:00:38.816554 | orchestrator |             "osd_lvm_uuid": "afd03ade-fddc-513b-974e-73ae3400739d"
2025-03-11 00:00:38.816880 | orchestrator |         },
2025-03-11 00:00:38.819663 | orchestrator |         "sdc": {
2025-03-11 00:00:38.819878 | orchestrator |             "osd_lvm_uuid": "2c164e24-0081-5461-8b83-1ef82bb0535c"
2025-03-11 00:00:38.819902 | orchestrator |         }
2025-03-11 00:00:38.819920 | orchestrator |     }
2025-03-11 00:00:38.819934 | orchestrator | }
2025-03-11 00:00:38.819954 | orchestrator |
2025-03-11 00:00:38.820268 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-03-11 00:00:38.820499 | orchestrator | Tuesday 11 March 2025 00:00:38 +0000 (0:00:00.187) 0:00:32.933 *********
2025-03-11 00:00:38.956054 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:38.956214 | orchestrator |
2025-03-11 00:00:38.957566 | orchestrator | TASK [Print DB devices] ********************************************************
2025-03-11 00:00:38.958055 | orchestrator | Tuesday 11 March 2025 00:00:38 +0000 (0:00:00.142) 0:00:33.076 *********
2025-03-11 00:00:39.130651 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:39.131163 | orchestrator |
2025-03-11 00:00:39.132812 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-03-11 00:00:39.133338 | orchestrator | Tuesday 11 March 2025 00:00:39 +0000 (0:00:00.174) 0:00:33.251 *********
2025-03-11 00:00:39.272711 | orchestrator | skipping: [testbed-node-4]
2025-03-11 00:00:39.274308 | orchestrator |
2025-03-11 00:00:39.274490 | orchestrator | TASK [Print configuration data] ************************************************
2025-03-11 00:00:39.274520 | orchestrator | Tuesday 11 March 2025 00:00:39 +0000 (0:00:00.138) 0:00:33.389 *********
2025-03-11 00:00:39.811327 | orchestrator | changed: [testbed-node-4] => {
2025-03-11 00:00:39.812326 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-03-11 00:00:39.813116 | orchestrator |         "ceph_osd_devices": {
2025-03-11 00:00:39.813636 | orchestrator |             "sdb": {
2025-03-11 00:00:39.816716 | orchestrator |                 "osd_lvm_uuid": "afd03ade-fddc-513b-974e-73ae3400739d"
2025-03-11 00:00:39.816964 | orchestrator |             },
2025-03-11 00:00:39.817031 | orchestrator |             "sdc": {
2025-03-11 00:00:39.817046 | orchestrator |                 "osd_lvm_uuid": "2c164e24-0081-5461-8b83-1ef82bb0535c"
2025-03-11 00:00:39.817061 | orchestrator |             }
2025-03-11 00:00:39.817076 | orchestrator |         },
2025-03-11 00:00:39.817095 | orchestrator |         "lvm_volumes": [
2025-03-11 00:00:39.817932 | orchestrator |             {
2025-03-11 00:00:39.818552 | orchestrator |                 "data": "osd-block-afd03ade-fddc-513b-974e-73ae3400739d",
2025-03-11 00:00:39.819404 | orchestrator |                 "data_vg": "ceph-afd03ade-fddc-513b-974e-73ae3400739d"
2025-03-11 00:00:39.819995 | orchestrator |             },
2025-03-11 00:00:39.820558 | orchestrator |             {
2025-03-11 00:00:39.820868 | orchestrator |                 "data": "osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c",
2025-03-11 00:00:39.821468 | orchestrator |                 "data_vg": "ceph-2c164e24-0081-5461-8b83-1ef82bb0535c"
2025-03-11 00:00:39.821949 | orchestrator |             }
2025-03-11 00:00:39.822518 | orchestrator |         ]
2025-03-11 00:00:39.822856 | orchestrator |     }
2025-03-11 00:00:39.823327 | orchestrator | }
2025-03-11 00:00:39.823705 | orchestrator |
2025-03-11 00:00:39.824243 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-03-11 00:00:39.824912 | orchestrator | Tuesday 11 March 2025 00:00:39 +0000 (0:00:00.541) 0:00:33.931 *********
2025-03-11 00:00:41.642314 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-03-11 00:00:41.642832 | orchestrator |
2025-03-11 00:00:41.643476 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-03-11 00:00:41.644254 | orchestrator |
2025-03-11 00:00:41.646880 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-03-11 00:00:41.647532 | orchestrator | Tuesday 11 March 2025 00:00:41 +0000 (0:00:01.831) 0:00:35.762 *********
2025-03-11 00:00:42.161305 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-03-11 00:00:42.161484 | orchestrator |
2025-03-11 00:00:42.161544 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-03-11 00:00:42.162327 | orchestrator | Tuesday 11 March 2025 00:00:42 +0000 (0:00:00.519) 0:00:36.282 *********
2025-03-11 00:00:42.536399 | orchestrator | ok: [testbed-node-5]
2025-03-11 00:00:42.536582 | orchestrator |
2025-03-11 00:00:42.536611 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:42.536828 | orchestrator | Tuesday 11 March 2025 00:00:42 +0000 (0:00:00.376) 0:00:36.658 *********
2025-03-11 00:00:42.917622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-03-11 00:00:42.919394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-03-11 00:00:42.920583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-03-11 00:00:42.920891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-03-11 00:00:42.921281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-03-11 00:00:42.921541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-03-11 00:00:42.921910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-03-11 00:00:42.922697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-03-11 00:00:42.923094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-03-11 00:00:42.923125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-03-11 00:00:42.923327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-03-11 00:00:42.923851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-03-11 00:00:42.924043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-03-11 00:00:42.924319 | orchestrator |
2025-03-11 00:00:42.924815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:42.925145 | orchestrator | Tuesday 11 March 2025 00:00:42 +0000 (0:00:00.379) 0:00:37.037 *********
2025-03-11 00:00:43.159707 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:43.160236 | orchestrator |
2025-03-11 00:00:43.160453 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:43.160483 | orchestrator | Tuesday 11 March 2025 00:00:43 +0000 (0:00:00.244) 0:00:37.281 *********
2025-03-11 00:00:43.361909 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:43.362163 | orchestrator |
2025-03-11 00:00:43.362198 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:43.362635 | orchestrator | Tuesday 11 March 2025 00:00:43 +0000 (0:00:00.202) 0:00:37.483 *********
2025-03-11 00:00:43.547428 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:43.548038 | orchestrator |
2025-03-11 00:00:43.549179 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:43.550211 | orchestrator | Tuesday 11 March 2025 00:00:43 +0000 (0:00:00.184) 0:00:37.668 *********
2025-03-11 00:00:43.750247 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:43.750789 | orchestrator |
2025-03-11 00:00:43.751113 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:43.753463 | orchestrator | Tuesday 11 March 2025 00:00:43 +0000 (0:00:00.202) 0:00:37.871 *********
2025-03-11 00:00:43.979480 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:43.979580 | orchestrator |
2025-03-11 00:00:43.980313 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:43.981382 | orchestrator | Tuesday 11 March 2025 00:00:43 +0000 (0:00:00.229) 0:00:38.100 *********
2025-03-11 00:00:44.182471 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:44.183584 | orchestrator |
2025-03-11 00:00:44.185626 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:44.382863 | orchestrator | Tuesday 11 March 2025 00:00:44 +0000 (0:00:00.202) 0:00:38.302 *********
2025-03-11 00:00:44.383000 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:44.384590 | orchestrator |
2025-03-11 00:00:44.387046 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:44.706299 | orchestrator | Tuesday 11 March 2025 00:00:44 +0000 (0:00:00.201) 0:00:38.504 *********
2025-03-11 00:00:44.706387 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:44.706418 | orchestrator |
2025-03-11 00:00:44.707043 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:44.707129 | orchestrator | Tuesday 11 March 2025 00:00:44 +0000 (0:00:00.323) 0:00:38.827 *********
2025-03-11 00:00:45.135633 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_89492e79-5deb-49f7-a1a7-185d4ce5c08c)
2025-03-11 00:00:45.136440 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_89492e79-5deb-49f7-a1a7-185d4ce5c08c)
2025-03-11 00:00:45.138933 | orchestrator |
2025-03-11 00:00:45.139730 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:45.140080 | orchestrator | Tuesday 11 March 2025 00:00:45 +0000 (0:00:00.429) 0:00:39.256 *********
2025-03-11 00:00:45.585091 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4a074a62-8b02-498d-8d1e-97a298b60d07)
2025-03-11 00:00:45.585684 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4a074a62-8b02-498d-8d1e-97a298b60d07)
2025-03-11 00:00:45.585712 | orchestrator |
2025-03-11 00:00:45.586525 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:45.587482 | orchestrator | Tuesday 11 March 2025 00:00:45 +0000 (0:00:00.449) 0:00:39.705 *********
2025-03-11 00:00:46.046274 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7ad77790-9240-4bf7-8fbd-881e22f1e07b)
2025-03-11 00:00:46.046564 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7ad77790-9240-4bf7-8fbd-881e22f1e07b)
2025-03-11 00:00:46.047551 | orchestrator |
2025-03-11 00:00:46.050078 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:46.050202 | orchestrator | Tuesday 11 March 2025 00:00:46 +0000 (0:00:00.459) 0:00:40.164 *********
2025-03-11 00:00:46.615618 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6ce862c5-1280-46ce-a44b-7fdf993418a7)
2025-03-11 00:00:46.615818 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6ce862c5-1280-46ce-a44b-7fdf993418a7)
2025-03-11 00:00:46.616533 | orchestrator |
2025-03-11 00:00:46.616860 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 00:00:46.617555 | orchestrator | Tuesday 11 March 2025 00:00:46 +0000 (0:00:00.570) 0:00:40.735 *********
2025-03-11 00:00:46.986148 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-03-11 00:00:46.986323 | orchestrator |
2025-03-11 00:00:46.987157 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:46.987825 | orchestrator | Tuesday 11 March 2025 00:00:46 +0000 (0:00:00.366) 0:00:41.102 *********
2025-03-11 00:00:47.488291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-03-11 00:00:47.489124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-03-11 00:00:47.489187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-03-11 00:00:47.489268 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-03-11 00:00:47.490208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-03-11 00:00:47.493525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-03-11 00:00:47.494525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-03-11 00:00:47.494865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-03-11 00:00:47.495195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-03-11 00:00:47.496031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-03-11 00:00:47.496135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-03-11 00:00:47.496196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-03-11 00:00:47.496547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-03-11 00:00:47.496923 | orchestrator |
2025-03-11 00:00:47.497115 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:47.497528 | orchestrator | Tuesday 11 March 2025 00:00:47 +0000 (0:00:00.505) 0:00:41.608 *********
2025-03-11 00:00:47.686760 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:47.687854 | orchestrator |
2025-03-11 00:00:47.687887 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:47.687908 | orchestrator | Tuesday 11 March 2025 00:00:47 +0000 (0:00:00.199) 0:00:41.807 *********
2025-03-11 00:00:47.918141 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:47.918498 | orchestrator |
2025-03-11 00:00:47.918915 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:47.919808 | orchestrator | Tuesday 11 March 2025 00:00:47 +0000 (0:00:00.229) 0:00:42.037 *********
2025-03-11 00:00:48.145720 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:48.748759 | orchestrator |
2025-03-11 00:00:48.748855 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:48.748873 | orchestrator | Tuesday 11 March 2025 00:00:48 +0000 (0:00:00.226) 0:00:42.263 *********
2025-03-11 00:00:48.748901 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:48.748963 | orchestrator |
2025-03-11 00:00:48.749121 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:48.749534 | orchestrator | Tuesday 11 March 2025 00:00:48 +0000 (0:00:00.606) 0:00:42.869 *********
2025-03-11 00:00:48.980906 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:48.981572 | orchestrator |
2025-03-11 00:00:48.981612 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:48.981938 | orchestrator | Tuesday 11 March 2025 00:00:48 +0000 (0:00:00.229) 0:00:43.099 *********
2025-03-11 00:00:49.193758 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:49.195195 | orchestrator |
2025-03-11 00:00:49.195888 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:49.198819 | orchestrator | Tuesday 11 March 2025 00:00:49 +0000 (0:00:00.214) 0:00:43.314 *********
2025-03-11 00:00:49.407634 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:49.409427 | orchestrator |
2025-03-11 00:00:49.410275 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:49.411142 | orchestrator | Tuesday 11 March 2025 00:00:49 +0000 (0:00:00.213) 0:00:43.527 *********
2025-03-11 00:00:49.622135 | orchestrator | skipping: [testbed-node-5]
2025-03-11 00:00:49.622642 | orchestrator |
2025-03-11 00:00:49.622677 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:49.623074 | orchestrator | Tuesday 11 March 2025 00:00:49 +0000 (0:00:00.214) 0:00:43.741 *********
2025-03-11 00:00:50.353139 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-03-11 00:00:50.357481 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-03-11 00:00:50.357779 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-03-11 00:00:50.357833 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-03-11 00:00:50.357854 | orchestrator |
2025-03-11 00:00:50.359599 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 00:00:50.362878 | orchestrator | Tuesday 11 March 2025 00:00:50 +0000 (0:00:00.729) 0:00:44.471 ********* 2025-03-11 00:00:50.577640 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:50.579177 | orchestrator | 2025-03-11 00:00:50.580765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:50.584131 | orchestrator | Tuesday 11 March 2025 00:00:50 +0000 (0:00:00.227) 0:00:44.698 ********* 2025-03-11 00:00:50.821384 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:50.821579 | orchestrator | 2025-03-11 00:00:50.822377 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:50.822748 | orchestrator | Tuesday 11 March 2025 00:00:50 +0000 (0:00:00.241) 0:00:44.940 ********* 2025-03-11 00:00:51.104663 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:51.105628 | orchestrator | 2025-03-11 00:00:51.106163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 00:00:51.107102 | orchestrator | Tuesday 11 March 2025 00:00:51 +0000 (0:00:00.282) 0:00:45.222 ********* 2025-03-11 00:00:51.322163 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:51.322340 | orchestrator | 2025-03-11 00:00:51.328813 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-11 00:00:51.751612 | orchestrator | Tuesday 11 March 2025 00:00:51 +0000 (0:00:00.218) 0:00:45.441 ********* 2025-03-11 00:00:51.751722 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-03-11 00:00:51.751782 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-03-11 00:00:51.752649 | orchestrator | 2025-03-11 00:00:51.753271 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-11 00:00:51.753610 | orchestrator | Tuesday 11 March 2025 00:00:51 
+0000 (0:00:00.431) 0:00:45.872 ********* 2025-03-11 00:00:51.905787 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:51.906363 | orchestrator | 2025-03-11 00:00:51.907151 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-11 00:00:51.907372 | orchestrator | Tuesday 11 March 2025 00:00:51 +0000 (0:00:00.152) 0:00:46.025 ********* 2025-03-11 00:00:52.066237 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:52.067521 | orchestrator | 2025-03-11 00:00:52.068182 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-11 00:00:52.069333 | orchestrator | Tuesday 11 March 2025 00:00:52 +0000 (0:00:00.162) 0:00:46.187 ********* 2025-03-11 00:00:52.235712 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:52.236426 | orchestrator | 2025-03-11 00:00:52.236633 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-11 00:00:52.237344 | orchestrator | Tuesday 11 March 2025 00:00:52 +0000 (0:00:00.168) 0:00:46.356 ********* 2025-03-11 00:00:52.389310 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:00:52.389818 | orchestrator | 2025-03-11 00:00:52.391168 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-11 00:00:52.392084 | orchestrator | Tuesday 11 March 2025 00:00:52 +0000 (0:00:00.152) 0:00:46.508 ********* 2025-03-11 00:00:52.609728 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02208e85-9f55-5326-ae50-42694fdfd5d1'}}) 2025-03-11 00:00:52.610173 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'}}) 2025-03-11 00:00:52.611302 | orchestrator | 2025-03-11 00:00:52.611855 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-11 00:00:52.614556 | orchestrator | Tuesday 
11 March 2025 00:00:52 +0000 (0:00:00.221) 0:00:46.730 ********* 2025-03-11 00:00:52.782083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02208e85-9f55-5326-ae50-42694fdfd5d1'}})  2025-03-11 00:00:52.783143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'}})  2025-03-11 00:00:52.783813 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:52.784548 | orchestrator | 2025-03-11 00:00:52.785353 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-11 00:00:52.785808 | orchestrator | Tuesday 11 March 2025 00:00:52 +0000 (0:00:00.172) 0:00:46.902 ********* 2025-03-11 00:00:52.976524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02208e85-9f55-5326-ae50-42694fdfd5d1'}})  2025-03-11 00:00:52.977381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'}})  2025-03-11 00:00:52.978107 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:52.978147 | orchestrator | 2025-03-11 00:00:52.979590 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-11 00:00:53.157856 | orchestrator | Tuesday 11 March 2025 00:00:52 +0000 (0:00:00.191) 0:00:47.094 ********* 2025-03-11 00:00:53.157935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02208e85-9f55-5326-ae50-42694fdfd5d1'}})  2025-03-11 00:00:53.158724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'}})  2025-03-11 00:00:53.159438 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:53.160043 | orchestrator | 2025-03-11 00:00:53.161538 | orchestrator | TASK [Compile lvm_volumes] 
***************************************************** 2025-03-11 00:00:53.162252 | orchestrator | Tuesday 11 March 2025 00:00:53 +0000 (0:00:00.183) 0:00:47.278 ********* 2025-03-11 00:00:53.311213 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:00:53.312276 | orchestrator | 2025-03-11 00:00:53.312303 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-11 00:00:53.312324 | orchestrator | Tuesday 11 March 2025 00:00:53 +0000 (0:00:00.152) 0:00:47.430 ********* 2025-03-11 00:00:53.477033 | orchestrator | ok: [testbed-node-5] 2025-03-11 00:00:53.477161 | orchestrator | 2025-03-11 00:00:53.478309 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-11 00:00:53.479433 | orchestrator | Tuesday 11 March 2025 00:00:53 +0000 (0:00:00.167) 0:00:47.597 ********* 2025-03-11 00:00:53.661610 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:53.662697 | orchestrator | 2025-03-11 00:00:53.664118 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-11 00:00:53.664166 | orchestrator | Tuesday 11 March 2025 00:00:53 +0000 (0:00:00.182) 0:00:47.780 ********* 2025-03-11 00:00:54.060585 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:54.061047 | orchestrator | 2025-03-11 00:00:54.061376 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-11 00:00:54.062686 | orchestrator | Tuesday 11 March 2025 00:00:54 +0000 (0:00:00.399) 0:00:48.180 ********* 2025-03-11 00:00:54.216940 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:54.218184 | orchestrator | 2025-03-11 00:00:54.218917 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-11 00:00:54.222293 | orchestrator | Tuesday 11 March 2025 00:00:54 +0000 (0:00:00.152) 0:00:48.332 ********* 2025-03-11 00:00:54.366188 | orchestrator | ok: 
[testbed-node-5] => { 2025-03-11 00:00:54.367052 | orchestrator |  "ceph_osd_devices": { 2025-03-11 00:00:54.368183 | orchestrator |  "sdb": { 2025-03-11 00:00:54.368883 | orchestrator |  "osd_lvm_uuid": "02208e85-9f55-5326-ae50-42694fdfd5d1" 2025-03-11 00:00:54.369962 | orchestrator |  }, 2025-03-11 00:00:54.370575 | orchestrator |  "sdc": { 2025-03-11 00:00:54.371297 | orchestrator |  "osd_lvm_uuid": "3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1" 2025-03-11 00:00:54.371952 | orchestrator |  } 2025-03-11 00:00:54.372658 | orchestrator |  } 2025-03-11 00:00:54.373196 | orchestrator | } 2025-03-11 00:00:54.373572 | orchestrator | 2025-03-11 00:00:54.374210 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-11 00:00:54.374523 | orchestrator | Tuesday 11 March 2025 00:00:54 +0000 (0:00:00.152) 0:00:48.485 ********* 2025-03-11 00:00:54.509329 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:54.510456 | orchestrator | 2025-03-11 00:00:54.511461 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-11 00:00:54.512475 | orchestrator | Tuesday 11 March 2025 00:00:54 +0000 (0:00:00.142) 0:00:48.628 ********* 2025-03-11 00:00:54.685312 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:54.685681 | orchestrator | 2025-03-11 00:00:54.685760 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-11 00:00:54.686492 | orchestrator | Tuesday 11 March 2025 00:00:54 +0000 (0:00:00.178) 0:00:48.806 ********* 2025-03-11 00:00:54.839803 | orchestrator | skipping: [testbed-node-5] 2025-03-11 00:00:54.840746 | orchestrator | 2025-03-11 00:00:54.841474 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-11 00:00:54.841882 | orchestrator | Tuesday 11 March 2025 00:00:54 +0000 (0:00:00.154) 0:00:48.960 ********* 2025-03-11 00:00:55.145103 | orchestrator | changed: 
[testbed-node-5] => { 2025-03-11 00:00:55.145708 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-11 00:00:55.147780 | orchestrator |  "ceph_osd_devices": { 2025-03-11 00:00:55.148232 | orchestrator |  "sdb": { 2025-03-11 00:00:55.148264 | orchestrator |  "osd_lvm_uuid": "02208e85-9f55-5326-ae50-42694fdfd5d1" 2025-03-11 00:00:55.149626 | orchestrator |  }, 2025-03-11 00:00:55.150488 | orchestrator |  "sdc": { 2025-03-11 00:00:55.150714 | orchestrator |  "osd_lvm_uuid": "3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1" 2025-03-11 00:00:55.151512 | orchestrator |  } 2025-03-11 00:00:55.152042 | orchestrator |  }, 2025-03-11 00:00:55.152920 | orchestrator |  "lvm_volumes": [ 2025-03-11 00:00:55.153141 | orchestrator |  { 2025-03-11 00:00:55.153827 | orchestrator |  "data": "osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1", 2025-03-11 00:00:55.154139 | orchestrator |  "data_vg": "ceph-02208e85-9f55-5326-ae50-42694fdfd5d1" 2025-03-11 00:00:55.155240 | orchestrator |  }, 2025-03-11 00:00:55.155506 | orchestrator |  { 2025-03-11 00:00:55.155537 | orchestrator |  "data": "osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1", 2025-03-11 00:00:55.155907 | orchestrator |  "data_vg": "ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1" 2025-03-11 00:00:55.156257 | orchestrator |  } 2025-03-11 00:00:55.156632 | orchestrator |  ] 2025-03-11 00:00:55.157149 | orchestrator |  } 2025-03-11 00:00:55.157588 | orchestrator | } 2025-03-11 00:00:55.157947 | orchestrator | 2025-03-11 00:00:55.158332 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-11 00:00:55.158772 | orchestrator | Tuesday 11 March 2025 00:00:55 +0000 (0:00:00.303) 0:00:49.263 ********* 2025-03-11 00:00:56.538485 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-11 00:00:56.541227 | orchestrator | 2025-03-11 00:00:56.542821 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 
00:00:56.542871 | orchestrator | 2025-03-11 00:00:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-11 00:00:56.544010 | orchestrator | 2025-03-11 00:00:56 | INFO  | Please wait and do not abort execution. 2025-03-11 00:00:56.544045 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-11 00:00:56.544522 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-11 00:00:56.544551 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-11 00:00:56.545470 | orchestrator | 2025-03-11 00:00:56.546354 | orchestrator | 2025-03-11 00:00:56.548754 | orchestrator | 2025-03-11 00:00:56.549727 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-11 00:00:56.550417 | orchestrator | Tuesday 11 March 2025 00:00:56 +0000 (0:00:01.391) 0:00:50.655 ********* 2025-03-11 00:00:56.551115 | orchestrator | =============================================================================== 2025-03-11 00:00:56.551451 | orchestrator | Write configuration file ------------------------------------------------ 5.41s 2025-03-11 00:00:56.552175 | orchestrator | Add known partitions to the list of available block devices ------------- 1.52s 2025-03-11 00:00:56.552895 | orchestrator | Add known links to the list of available block devices ------------------ 1.50s 2025-03-11 00:00:56.553420 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.16s 2025-03-11 00:00:56.553628 | orchestrator | Print configuration data ------------------------------------------------ 1.15s 2025-03-11 00:00:56.554149 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s 2025-03-11 00:00:56.555024 | orchestrator | Add known partitions to the list of available 
block devices ------------- 1.01s 2025-03-11 00:00:56.555120 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.97s 2025-03-11 00:00:56.555377 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s 2025-03-11 00:00:56.555862 | orchestrator | Get initial list of available block devices ----------------------------- 0.87s 2025-03-11 00:00:56.556313 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.87s 2025-03-11 00:00:56.558112 | orchestrator | Set WAL devices config data --------------------------------------------- 0.86s 2025-03-11 00:00:56.558544 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2025-03-11 00:00:56.558866 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.81s 2025-03-11 00:00:56.559418 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2025-03-11 00:00:56.560270 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2025-03-11 00:00:56.563630 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2025-03-11 00:00:56.564440 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2025-03-11 00:00:56.565041 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.69s 2025-03-11 00:00:56.565464 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s 2025-03-11 00:01:09.124830 | orchestrator | 2025-03-11 00:01:09 | INFO  | Task e66a64b8-c615-4a96-9e5e-0edff3419b6d is running in background. Output coming soon. 2025-03-11 01:01:11.595212 | orchestrator | 2025-03-11 01:01:11 | INFO  | Task 16cc97fb-7aed-4f5d-b869-25fc18f865a7 (ceph-create-lvm-devices) was prepared for execution. 
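The `_ceph_configure_lvm_config_data` dump printed earlier pairs each entry in `ceph_osd_devices` with an `osd_lvm_uuid` and derives one `lvm_volumes` entry from it. A minimal sketch of that naming scheme, inferred purely from the names visible in the log (this is not the OSISM implementation):

```python
# Hedged sketch: expand ceph_osd_devices entries into block-only lvm_volumes
# entries, following the osd-block-<uuid> / ceph-<uuid> naming seen in the log.

def build_lvm_volumes(ceph_osd_devices):
    """One lvm_volumes entry per OSD device, keyed by its osd_lvm_uuid."""
    volumes = []
    for _device, cfg in sorted(ceph_osd_devices.items()):
        uuid = cfg["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

devices = {
    "sdb": {"osd_lvm_uuid": "02208e85-9f55-5326-ae50-42694fdfd5d1"},
    "sdc": {"osd_lvm_uuid": "3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1"},
}
print(build_lvm_volumes(devices))
```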
2025-03-11 01:01:15.252991 | orchestrator | 2025-03-11 01:01:11 | INFO  | It takes a moment until task 16cc97fb-7aed-4f5d-b869-25fc18f865a7 (ceph-create-lvm-devices) has been started and output is visible here. 2025-03-11 01:01:15.253128 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-11 01:01:15.855393 | orchestrator | 2025-03-11 01:01:15.857143 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-11 01:01:15.857971 | orchestrator | 2025-03-11 01:01:15.858852 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-11 01:01:15.860180 | orchestrator | Tuesday 11 March 2025 01:01:15 +0000 (0:00:00.522) 0:00:00.522 ********* 2025-03-11 01:01:16.132356 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-11 01:01:16.132714 | orchestrator | 2025-03-11 01:01:16.133602 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-11 01:01:16.133997 | orchestrator | Tuesday 11 March 2025 01:01:16 +0000 (0:00:00.278) 0:00:00.800 ********* 2025-03-11 01:01:16.371011 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:01:16.371509 | orchestrator | 2025-03-11 01:01:16.373851 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:17.177742 | orchestrator | Tuesday 11 March 2025 01:01:16 +0000 (0:00:00.237) 0:00:01.037 ********* 2025-03-11 01:01:17.177866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-03-11 01:01:17.177931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-03-11 01:01:17.177952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-03-11 01:01:17.178648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-3 => (item=loop3) 2025-03-11 01:01:17.179291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-03-11 01:01:17.181350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-03-11 01:01:17.183235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-03-11 01:01:17.184833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-03-11 01:01:17.186158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-03-11 01:01:17.186596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-03-11 01:01:17.188022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-03-11 01:01:17.188487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-03-11 01:01:17.190192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-03-11 01:01:17.396427 | orchestrator | 2025-03-11 01:01:17.396506 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:17.396522 | orchestrator | Tuesday 11 March 2025 01:01:17 +0000 (0:00:00.808) 0:00:01.846 ********* 2025-03-11 01:01:17.396545 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:17.396984 | orchestrator | 2025-03-11 01:01:17.397659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:17.397989 | orchestrator | Tuesday 11 March 2025 01:01:17 +0000 (0:00:00.218) 0:00:02.064 ********* 2025-03-11 01:01:17.670654 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:17.671206 | orchestrator | 2025-03-11 01:01:17.671244 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-03-11 01:01:17.671738 | orchestrator | Tuesday 11 March 2025 01:01:17 +0000 (0:00:00.274) 0:00:02.339 ********* 2025-03-11 01:01:17.892141 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:17.892297 | orchestrator | 2025-03-11 01:01:17.893832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:17.894775 | orchestrator | Tuesday 11 March 2025 01:01:17 +0000 (0:00:00.221) 0:00:02.560 ********* 2025-03-11 01:01:18.111907 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:18.112048 | orchestrator | 2025-03-11 01:01:18.112614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:18.113367 | orchestrator | Tuesday 11 March 2025 01:01:18 +0000 (0:00:00.220) 0:00:02.780 ********* 2025-03-11 01:01:18.366419 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:18.366623 | orchestrator | 2025-03-11 01:01:18.367237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:18.367930 | orchestrator | Tuesday 11 March 2025 01:01:18 +0000 (0:00:00.253) 0:00:03.034 ********* 2025-03-11 01:01:18.569150 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:18.570368 | orchestrator | 2025-03-11 01:01:18.571826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:18.572838 | orchestrator | Tuesday 11 March 2025 01:01:18 +0000 (0:00:00.203) 0:00:03.237 ********* 2025-03-11 01:01:18.794477 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:18.794673 | orchestrator | 2025-03-11 01:01:18.795812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:18.796171 | orchestrator | Tuesday 11 March 2025 01:01:18 +0000 (0:00:00.225) 0:00:03.463 ********* 2025-03-11 01:01:19.015599 | orchestrator | skipping: 
[testbed-node-3] 2025-03-11 01:01:19.017911 | orchestrator | 2025-03-11 01:01:19.020693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:19.944946 | orchestrator | Tuesday 11 March 2025 01:01:19 +0000 (0:00:00.220) 0:00:03.684 ********* 2025-03-11 01:01:19.945061 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d67a3c24-b729-45e0-8397-e020ae3d0e20) 2025-03-11 01:01:19.945141 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d67a3c24-b729-45e0-8397-e020ae3d0e20) 2025-03-11 01:01:19.945163 | orchestrator | 2025-03-11 01:01:19.946579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:19.947025 | orchestrator | Tuesday 11 March 2025 01:01:19 +0000 (0:00:00.926) 0:00:04.610 ********* 2025-03-11 01:01:20.412884 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e87dbd84-35d6-4b7e-85dc-79bdef85b968) 2025-03-11 01:01:20.413623 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e87dbd84-35d6-4b7e-85dc-79bdef85b968) 2025-03-11 01:01:20.413664 | orchestrator | 2025-03-11 01:01:20.414109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:20.415641 | orchestrator | Tuesday 11 March 2025 01:01:20 +0000 (0:00:00.469) 0:00:05.079 ********* 2025-03-11 01:01:20.900630 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8f8c648b-56e1-45e0-bc37-3aa283872edf) 2025-03-11 01:01:20.901707 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8f8c648b-56e1-45e0-bc37-3aa283872edf) 2025-03-11 01:01:20.902741 | orchestrator | 2025-03-11 01:01:20.903273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:20.904177 | orchestrator | Tuesday 11 March 2025 01:01:20 +0000 (0:00:00.489) 0:00:05.568 ********* 2025-03-11 
01:01:21.408341 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3a54aab5-53b8-4264-89d4-baa19ed5d083) 2025-03-11 01:01:21.408975 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3a54aab5-53b8-4264-89d4-baa19ed5d083) 2025-03-11 01:01:21.409060 | orchestrator | 2025-03-11 01:01:21.409535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:01:21.410096 | orchestrator | Tuesday 11 March 2025 01:01:21 +0000 (0:00:00.507) 0:00:06.075 ********* 2025-03-11 01:01:21.792164 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-11 01:01:21.792365 | orchestrator | 2025-03-11 01:01:21.792398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:21.792497 | orchestrator | Tuesday 11 March 2025 01:01:21 +0000 (0:00:00.384) 0:00:06.460 ********* 2025-03-11 01:01:22.353921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-03-11 01:01:22.355168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-03-11 01:01:22.356416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-03-11 01:01:22.358088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-03-11 01:01:22.359859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-03-11 01:01:22.360990 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-03-11 01:01:22.361957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-03-11 01:01:22.363385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-03-11 
01:01:22.363963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-03-11 01:01:22.364960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-03-11 01:01:22.366111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-03-11 01:01:22.366543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-03-11 01:01:22.366881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-03-11 01:01:22.367236 | orchestrator | 2025-03-11 01:01:22.367879 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:22.368269 | orchestrator | Tuesday 11 March 2025 01:01:22 +0000 (0:00:00.561) 0:00:07.021 ********* 2025-03-11 01:01:22.577493 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:22.577810 | orchestrator | 2025-03-11 01:01:22.578684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:22.578785 | orchestrator | Tuesday 11 March 2025 01:01:22 +0000 (0:00:00.224) 0:00:07.245 ********* 2025-03-11 01:01:22.798238 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:22.799045 | orchestrator | 2025-03-11 01:01:22.800697 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:22.803505 | orchestrator | Tuesday 11 March 2025 01:01:22 +0000 (0:00:00.221) 0:00:07.466 ********* 2025-03-11 01:01:23.012925 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:23.013392 | orchestrator | 2025-03-11 01:01:23.013671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:23.014155 | orchestrator | Tuesday 11 March 2025 01:01:23 +0000 (0:00:00.215) 0:00:07.682 ********* 
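The "Add known links" and "Add known partitions" loops above appear to start from an initial block-device list and append each `/dev/disk/by-id` link and each partition whose parent device is already known. A hedged sketch of that pattern; the sample data mirrors the log, while the exact playbook logic is an assumption:

```python
# Hedged sketch of the device-discovery loops: extend the available-device
# list with by-id links and partitions that resolve to known devices.

def extend_devices(devices, links, partitions):
    """Return devices plus matching by-id links and partitions."""
    available = list(devices)
    available += [link for link, target in links.items() if target in devices]
    available += [part for part, parent in partitions.items() if parent in devices]
    return available

devices = ["sda", "sdb", "sdc", "sdd", "sr0"]
links = {"scsi-0QEMU_QEMU_HARDDISK_d67a3c24-b729-45e0-8397-e020ae3d0e20": "sdb"}
partitions = {"sda1": "sda", "sda14": "sda", "sda15": "sda", "sda16": "sda"}
print(extend_devices(devices, links, partitions))
```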
2025-03-11 01:01:23.211108 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:23.211589 | orchestrator | 2025-03-11 01:01:23.211855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:23.212845 | orchestrator | Tuesday 11 March 2025 01:01:23 +0000 (0:00:00.197) 0:00:07.879 ********* 2025-03-11 01:01:23.846532 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:23.846678 | orchestrator | 2025-03-11 01:01:23.846983 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:23.847240 | orchestrator | Tuesday 11 March 2025 01:01:23 +0000 (0:00:00.634) 0:00:08.514 ********* 2025-03-11 01:01:24.059940 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:24.060170 | orchestrator | 2025-03-11 01:01:24.060931 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:24.061682 | orchestrator | Tuesday 11 March 2025 01:01:24 +0000 (0:00:00.214) 0:00:08.728 ********* 2025-03-11 01:01:24.274592 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:24.274702 | orchestrator | 2025-03-11 01:01:24.275561 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:24.276204 | orchestrator | Tuesday 11 March 2025 01:01:24 +0000 (0:00:00.214) 0:00:08.943 ********* 2025-03-11 01:01:24.499418 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:24.499921 | orchestrator | 2025-03-11 01:01:24.500141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:24.500230 | orchestrator | Tuesday 11 March 2025 01:01:24 +0000 (0:00:00.222) 0:00:09.166 ********* 2025-03-11 01:01:25.197923 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-03-11 01:01:25.199148 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-03-11 01:01:25.202892 | orchestrator | ok: 
[testbed-node-3] => (item=sda15) 2025-03-11 01:01:25.203112 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-03-11 01:01:25.203141 | orchestrator | 2025-03-11 01:01:25.203162 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:25.203842 | orchestrator | Tuesday 11 March 2025 01:01:25 +0000 (0:00:00.695) 0:00:09.862 ********* 2025-03-11 01:01:25.429920 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:25.430167 | orchestrator | 2025-03-11 01:01:25.430203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:25.430930 | orchestrator | Tuesday 11 March 2025 01:01:25 +0000 (0:00:00.234) 0:00:10.096 ********* 2025-03-11 01:01:25.659149 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:25.660007 | orchestrator | 2025-03-11 01:01:25.660847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:25.663685 | orchestrator | Tuesday 11 March 2025 01:01:25 +0000 (0:00:00.227) 0:00:10.324 ********* 2025-03-11 01:01:25.862876 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:25.863320 | orchestrator | 2025-03-11 01:01:25.863357 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:01:25.864227 | orchestrator | Tuesday 11 March 2025 01:01:25 +0000 (0:00:00.204) 0:00:10.528 ********* 2025-03-11 01:01:26.101400 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:26.102595 | orchestrator | 2025-03-11 01:01:26.102663 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-11 01:01:26.251152 | orchestrator | Tuesday 11 March 2025 01:01:26 +0000 (0:00:00.239) 0:00:10.768 ********* 2025-03-11 01:01:26.251261 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:26.251830 | orchestrator | 2025-03-11 01:01:26.252563 | orchestrator | TASK [Create 
dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-11 01:01:26.253682 | orchestrator | Tuesday 11 March 2025 01:01:26 +0000 (0:00:00.151) 0:00:10.919 ********* 2025-03-11 01:01:26.751151 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'}}) 2025-03-11 01:01:26.751331 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '25356c17-92f4-5cd4-84bc-9f6437381575'}}) 2025-03-11 01:01:26.751361 | orchestrator | 2025-03-11 01:01:26.751926 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-11 01:01:26.751959 | orchestrator | Tuesday 11 March 2025 01:01:26 +0000 (0:00:00.499) 0:00:11.418 ********* 2025-03-11 01:01:29.249218 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'}) 2025-03-11 01:01:29.249409 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'}) 2025-03-11 01:01:29.249437 | orchestrator | 2025-03-11 01:01:29.249486 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-11 01:01:29.249571 | orchestrator | Tuesday 11 March 2025 01:01:29 +0000 (0:00:02.496) 0:00:13.915 ********* 2025-03-11 01:01:29.435792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})  2025-03-11 01:01:29.436570 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})  2025-03-11 01:01:29.436931 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:29.437674 | orchestrator | 2025-03-11 01:01:29.438806 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-11 01:01:29.440915 | orchestrator | Tuesday 11 March 2025 01:01:29 +0000 (0:00:00.189) 0:00:14.104 ********* 2025-03-11 01:01:30.952857 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'}) 2025-03-11 01:01:30.954268 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'}) 2025-03-11 01:01:30.954345 | orchestrator | 2025-03-11 01:01:30.957632 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-11 01:01:30.958312 | orchestrator | Tuesday 11 March 2025 01:01:30 +0000 (0:00:01.513) 0:00:15.618 ********* 2025-03-11 01:01:31.124867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})  2025-03-11 01:01:31.125079 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})  2025-03-11 01:01:31.125596 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:31.126544 | orchestrator | 2025-03-11 01:01:31.130132 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-11 01:01:31.272800 | orchestrator | Tuesday 11 March 2025 01:01:31 +0000 (0:00:00.175) 0:00:15.793 ********* 2025-03-11 01:01:31.272919 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:31.273161 | orchestrator | 2025-03-11 01:01:31.274109 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-11 01:01:31.274384 | orchestrator | Tuesday 11 March 2025 01:01:31 +0000 (0:00:00.146) 0:00:15.940 ********* 
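The "Create block VGs" and "Create block LVs" tasks above roughly correspond to the following LVM commands for one `lvm_volumes` entry. This is an illustrative sketch only: the `/dev/sdb` device path is an assumption, since the playbook resolves the backing device from `ceph_osd_devices`:

```python
# Hedged sketch: build the command strings for one block VG and its OSD LV.
# Illustrative only; not the module calls the playbook actually uses.

def lvm_commands(entry, device):
    """Commands to create one block VG and a full-size OSD LV on it."""
    return [
        f"pvcreate {device}",
        f"vgcreate {entry['data_vg']} {device}",
        f"lvcreate -l 100%FREE -n {entry['data']} {entry['data_vg']}",
    ]

entry = {
    "data": "osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b",
    "data_vg": "ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b",
}
for cmd in lvm_commands(entry, "/dev/sdb"):
    print(cmd)
```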
2025-03-11 01:01:31.447998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})  2025-03-11 01:01:31.448176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})  2025-03-11 01:01:31.448203 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:31.448765 | orchestrator | 2025-03-11 01:01:31.449137 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-11 01:01:31.449579 | orchestrator | Tuesday 11 March 2025 01:01:31 +0000 (0:00:00.176) 0:00:16.116 ********* 2025-03-11 01:01:31.589226 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:31.589401 | orchestrator | 2025-03-11 01:01:31.589583 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-11 01:01:31.589931 | orchestrator | Tuesday 11 March 2025 01:01:31 +0000 (0:00:00.141) 0:00:16.258 ********* 2025-03-11 01:01:31.777512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})  2025-03-11 01:01:31.778065 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})  2025-03-11 01:01:31.778271 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:31.779748 | orchestrator | 2025-03-11 01:01:31.780663 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-11 01:01:31.781265 | orchestrator | Tuesday 11 March 2025 01:01:31 +0000 (0:00:00.187) 0:00:16.446 ********* 2025-03-11 01:01:32.138808 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:32.139795 | orchestrator | 2025-03-11 
01:01:32.142723 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-11 01:01:32.346496 | orchestrator | Tuesday 11 March 2025 01:01:32 +0000 (0:00:00.360) 0:00:16.806 ********* 2025-03-11 01:01:32.346601 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})  2025-03-11 01:01:32.347282 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})  2025-03-11 01:01:32.351161 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:32.351497 | orchestrator | 2025-03-11 01:01:32.352345 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-11 01:01:32.353248 | orchestrator | Tuesday 11 March 2025 01:01:32 +0000 (0:00:00.206) 0:00:17.013 ********* 2025-03-11 01:01:32.493088 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:01:32.494119 | orchestrator | 2025-03-11 01:01:32.494191 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-11 01:01:32.495026 | orchestrator | Tuesday 11 March 2025 01:01:32 +0000 (0:00:00.147) 0:00:17.160 ********* 2025-03-11 01:01:32.675836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})  2025-03-11 01:01:32.675921 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})  2025-03-11 01:01:32.676528 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:32.677531 | orchestrator | 2025-03-11 01:01:32.678312 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-11 01:01:32.678637 | 
orchestrator | Tuesday 11 March 2025 01:01:32 +0000 (0:00:00.183) 0:00:17.344 ********* 2025-03-11 01:01:32.855193 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})  2025-03-11 01:01:32.855809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})  2025-03-11 01:01:32.855841 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:32.856236 | orchestrator | 2025-03-11 01:01:32.856772 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-11 01:01:32.857447 | orchestrator | Tuesday 11 March 2025 01:01:32 +0000 (0:00:00.179) 0:00:17.523 ********* 2025-03-11 01:01:33.040288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})  2025-03-11 01:01:33.040420 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})  2025-03-11 01:01:33.040973 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:33.041555 | orchestrator | 2025-03-11 01:01:33.044227 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-11 01:01:33.045342 | orchestrator | Tuesday 11 March 2025 01:01:33 +0000 (0:00:00.183) 0:00:17.707 ********* 2025-03-11 01:01:33.196733 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:33.197275 | orchestrator | 2025-03-11 01:01:33.198104 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-11 01:01:33.199419 | orchestrator | Tuesday 11 March 2025 01:01:33 +0000 (0:00:00.150) 0:00:17.858 ********* 2025-03-11 01:01:33.350096 | orchestrator | 
skipping: [testbed-node-3] 2025-03-11 01:01:33.351218 | orchestrator | 2025-03-11 01:01:33.352930 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-11 01:01:33.353729 | orchestrator | Tuesday 11 March 2025 01:01:33 +0000 (0:00:00.158) 0:00:18.017 ********* 2025-03-11 01:01:33.498725 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:33.498876 | orchestrator | 2025-03-11 01:01:33.499793 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-11 01:01:33.500388 | orchestrator | Tuesday 11 March 2025 01:01:33 +0000 (0:00:00.149) 0:00:18.166 ********* 2025-03-11 01:01:33.666448 | orchestrator | ok: [testbed-node-3] => { 2025-03-11 01:01:33.666691 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-11 01:01:33.666719 | orchestrator | } 2025-03-11 01:01:33.666735 | orchestrator | 2025-03-11 01:01:33.666755 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-11 01:01:33.667259 | orchestrator | Tuesday 11 March 2025 01:01:33 +0000 (0:00:00.167) 0:00:18.333 ********* 2025-03-11 01:01:33.824017 | orchestrator | ok: [testbed-node-3] => { 2025-03-11 01:01:33.824769 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-11 01:01:33.825153 | orchestrator | } 2025-03-11 01:01:33.826623 | orchestrator | 2025-03-11 01:01:33.826871 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-11 01:01:33.827573 | orchestrator | Tuesday 11 March 2025 01:01:33 +0000 (0:00:00.159) 0:00:18.493 ********* 2025-03-11 01:01:33.984380 | orchestrator | ok: [testbed-node-3] => { 2025-03-11 01:01:33.985374 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-11 01:01:33.986798 | orchestrator | } 2025-03-11 01:01:33.987909 | orchestrator | 2025-03-11 01:01:33.988193 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 
2025-03-11 01:01:33.989358 | orchestrator | Tuesday 11 March 2025 01:01:33 +0000 (0:00:00.159) 0:00:18.652 ********* 2025-03-11 01:01:35.041661 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:01:35.041836 | orchestrator | 2025-03-11 01:01:35.042916 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-11 01:01:35.044586 | orchestrator | Tuesday 11 March 2025 01:01:35 +0000 (0:00:01.056) 0:00:19.708 ********* 2025-03-11 01:01:35.593921 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:01:35.594155 | orchestrator | 2025-03-11 01:01:35.594640 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-11 01:01:35.594673 | orchestrator | Tuesday 11 March 2025 01:01:35 +0000 (0:00:00.553) 0:00:20.261 ********* 2025-03-11 01:01:36.165619 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:01:36.166256 | orchestrator | 2025-03-11 01:01:36.167303 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-11 01:01:36.168260 | orchestrator | Tuesday 11 March 2025 01:01:36 +0000 (0:00:00.572) 0:00:20.834 ********* 2025-03-11 01:01:36.329435 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:01:36.331405 | orchestrator | 2025-03-11 01:01:36.332215 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-11 01:01:36.333755 | orchestrator | Tuesday 11 March 2025 01:01:36 +0000 (0:00:00.162) 0:00:20.996 ********* 2025-03-11 01:01:36.485249 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:36.486631 | orchestrator | 2025-03-11 01:01:36.489161 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-11 01:01:36.606001 | orchestrator | Tuesday 11 March 2025 01:01:36 +0000 (0:00:00.156) 0:00:21.152 ********* 2025-03-11 01:01:36.606135 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:36.607627 | orchestrator | 
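The "Gather DB/WAL/DB+WAL VGs" tasks and the subsequent "Combine JSON from _db/wal/db_wal_vgs_cmd_output" task appear to merge several `vgs`-style JSON reports into the single `vgs_report` structure printed in the log. A hedged sketch of that merge; the sample payload is illustrative and not taken from this run (in this run all three lists were empty, hence `"vg": []`):

```python
import json

# Hedged sketch: combine the "vg" lists from several vgs JSON report outputs
# (as produced by `vgs --reportformat json`) into one report dict.

def combine_vg_reports(*cmd_outputs):
    """Merge the vg lists of several vgs JSON reports into one report."""
    combined = {"vg": []}
    for out in cmd_outputs:
        for report in json.loads(out).get("report", []):
            combined["vg"].extend(report.get("vg", []))
    return combined

db_vgs = '{"report": [{"vg": [{"vg_name": "ceph-db-0", "vg_size": "100G", "vg_free": "50G"}]}]}'
wal_vgs = '{"report": [{"vg": []}]}'
print(combine_vg_reports(db_vgs, wal_vgs))
```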
2025-03-11 01:01:36.608450 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-11 01:01:36.611890 | orchestrator | Tuesday 11 March 2025 01:01:36 +0000 (0:00:00.121) 0:00:21.273 ********* 2025-03-11 01:01:36.762328 | orchestrator | ok: [testbed-node-3] => { 2025-03-11 01:01:36.762671 | orchestrator |  "vgs_report": { 2025-03-11 01:01:36.763408 | orchestrator |  "vg": [] 2025-03-11 01:01:36.763557 | orchestrator |  } 2025-03-11 01:01:36.765185 | orchestrator | } 2025-03-11 01:01:36.919993 | orchestrator | 2025-03-11 01:01:36.920100 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-11 01:01:36.920118 | orchestrator | Tuesday 11 March 2025 01:01:36 +0000 (0:00:00.157) 0:00:21.431 ********* 2025-03-11 01:01:36.920148 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:36.921541 | orchestrator | 2025-03-11 01:01:36.922334 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-11 01:01:36.923185 | orchestrator | Tuesday 11 March 2025 01:01:36 +0000 (0:00:00.155) 0:00:21.586 ********* 2025-03-11 01:01:37.086137 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:37.086847 | orchestrator | 2025-03-11 01:01:37.087811 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-11 01:01:37.088454 | orchestrator | Tuesday 11 March 2025 01:01:37 +0000 (0:00:00.167) 0:00:21.754 ********* 2025-03-11 01:01:37.221185 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:37.225777 | orchestrator | 2025-03-11 01:01:37.372945 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-11 01:01:37.373012 | orchestrator | Tuesday 11 March 2025 01:01:37 +0000 (0:00:00.135) 0:00:21.889 ********* 2025-03-11 01:01:37.373038 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:01:37.374296 | orchestrator | 
2025-03-11 01:01:37.374610 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-03-11 01:01:37.374638 | orchestrator | Tuesday 11 March 2025 01:01:37 +0000 (0:00:00.152) 0:00:22.042 *********
2025-03-11 01:01:37.723111 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:37.723967 | orchestrator |
2025-03-11 01:01:37.881347 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-03-11 01:01:37.881434 | orchestrator | Tuesday 11 March 2025 01:01:37 +0000 (0:00:00.349) 0:00:22.392 *********
2025-03-11 01:01:37.881514 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:37.881931 | orchestrator |
2025-03-11 01:01:37.882131 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-03-11 01:01:37.882243 | orchestrator | Tuesday 11 March 2025 01:01:37 +0000 (0:00:00.157) 0:00:22.549 *********
2025-03-11 01:01:38.035538 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:38.035671 | orchestrator |
2025-03-11 01:01:38.036282 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-03-11 01:01:38.036753 | orchestrator | Tuesday 11 March 2025 01:01:38 +0000 (0:00:00.154) 0:00:22.704 *********
2025-03-11 01:01:38.183267 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:38.183380 | orchestrator |
2025-03-11 01:01:38.184246 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-03-11 01:01:38.187764 | orchestrator | Tuesday 11 March 2025 01:01:38 +0000 (0:00:00.147) 0:00:22.851 *********
2025-03-11 01:01:38.336031 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:38.336578 | orchestrator |
2025-03-11 01:01:38.339082 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-03-11 01:01:38.341663 | orchestrator | Tuesday 11 March 2025 01:01:38 +0000 (0:00:00.152) 0:00:23.004 *********
2025-03-11 01:01:38.492307 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:38.492446 | orchestrator |
2025-03-11 01:01:38.493425 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-03-11 01:01:38.494104 | orchestrator | Tuesday 11 March 2025 01:01:38 +0000 (0:00:00.154) 0:00:23.158 *********
2025-03-11 01:01:38.637286 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:38.637852 | orchestrator |
2025-03-11 01:01:38.638380 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-03-11 01:01:38.639604 | orchestrator | Tuesday 11 March 2025 01:01:38 +0000 (0:00:00.146) 0:00:23.305 *********
2025-03-11 01:01:38.806967 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:38.809700 | orchestrator |
2025-03-11 01:01:38.811098 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-03-11 01:01:38.811132 | orchestrator | Tuesday 11 March 2025 01:01:38 +0000 (0:00:00.169) 0:00:23.475 *********
2025-03-11 01:01:38.970581 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:38.970747 | orchestrator |
2025-03-11 01:01:38.971213 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-03-11 01:01:38.971897 | orchestrator | Tuesday 11 March 2025 01:01:38 +0000 (0:00:00.164) 0:00:23.640 *********
2025-03-11 01:01:39.140339 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:39.142071 | orchestrator |
2025-03-11 01:01:39.142104 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-03-11 01:01:39.142127 | orchestrator | Tuesday 11 March 2025 01:01:39 +0000 (0:00:00.164) 0:00:23.804 *********
2025-03-11 01:01:39.321156 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:39.325533 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:39.327250 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:39.329256 | orchestrator |
2025-03-11 01:01:39.329825 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-03-11 01:01:39.330059 | orchestrator | Tuesday 11 March 2025 01:01:39 +0000 (0:00:00.184) 0:00:23.988 *********
2025-03-11 01:01:39.493056 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:39.493462 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:39.493547 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:39.493628 | orchestrator |
2025-03-11 01:01:39.493972 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-03-11 01:01:39.494362 | orchestrator | Tuesday 11 March 2025 01:01:39 +0000 (0:00:00.171) 0:00:24.160 *********
2025-03-11 01:01:39.898571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:39.899040 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:39.900774 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:39.901815 | orchestrator |
2025-03-11 01:01:39.902382 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-03-11 01:01:39.903612 | orchestrator | Tuesday 11 March 2025 01:01:39 +0000 (0:00:00.406) 0:00:24.567 *********
2025-03-11 01:01:40.065804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:40.065928 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:40.067424 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:40.067588 | orchestrator |
2025-03-11 01:01:40.068580 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-03-11 01:01:40.071068 | orchestrator | Tuesday 11 March 2025 01:01:40 +0000 (0:00:00.167) 0:00:24.734 *********
2025-03-11 01:01:40.263896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:40.269620 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:40.270319 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:40.270356 | orchestrator |
2025-03-11 01:01:40.271694 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-03-11 01:01:40.272123 | orchestrator | Tuesday 11 March 2025 01:01:40 +0000 (0:00:00.195) 0:00:24.930 *********
2025-03-11 01:01:40.518968 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:40.519651 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:40.520440 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:40.525579 | orchestrator |
2025-03-11 01:01:40.529712 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-03-11 01:01:40.532490 | orchestrator | Tuesday 11 March 2025 01:01:40 +0000 (0:00:00.253) 0:00:25.183 *********
2025-03-11 01:01:40.710638 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:40.711328 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:40.712169 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:40.713135 | orchestrator |
2025-03-11 01:01:40.715120 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-03-11 01:01:40.716429 | orchestrator | Tuesday 11 March 2025 01:01:40 +0000 (0:00:00.195) 0:00:25.379 *********
2025-03-11 01:01:40.913919 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:40.914648 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:40.915415 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:40.916607 | orchestrator |
2025-03-11 01:01:40.917832 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-03-11 01:01:40.918277 | orchestrator | Tuesday 11 March 2025 01:01:40 +0000 (0:00:00.202) 0:00:25.582 *********
2025-03-11 01:01:41.502973 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:01:41.504092 | orchestrator |
2025-03-11 01:01:41.505846 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-03-11 01:01:42.104366 | orchestrator | Tuesday 11 March 2025 01:01:41 +0000 (0:00:00.587) 0:00:26.169 *********
2025-03-11 01:01:42.104544 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:01:42.104636 | orchestrator |
2025-03-11 01:01:42.105137 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-03-11 01:01:42.105465 | orchestrator | Tuesday 11 March 2025 01:01:42 +0000 (0:00:00.601) 0:00:26.770 *********
2025-03-11 01:01:42.279730 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:01:42.283116 | orchestrator |
2025-03-11 01:01:42.502597 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-03-11 01:01:42.502707 | orchestrator | Tuesday 11 March 2025 01:01:42 +0000 (0:00:00.174) 0:00:26.945 *********
2025-03-11 01:01:42.502742 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'vg_name': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:42.504558 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'vg_name': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:42.508206 | orchestrator |
2025-03-11 01:01:42.509133 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-03-11 01:01:42.513880 | orchestrator | Tuesday 11 March 2025 01:01:42 +0000 (0:00:00.224) 0:00:27.169 *********
2025-03-11 01:01:42.899388 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:42.900977 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:42.902326 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:42.906330 | orchestrator |
2025-03-11 01:01:42.907605 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-03-11 01:01:42.908694 | orchestrator | Tuesday 11 March 2025 01:01:42 +0000 (0:00:00.397) 0:00:27.567 *********
2025-03-11 01:01:43.094446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:43.095799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:43.096033 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:43.096818 | orchestrator |
2025-03-11 01:01:43.099946 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-03-11 01:01:43.101030 | orchestrator | Tuesday 11 March 2025 01:01:43 +0000 (0:00:00.195) 0:00:27.762 *********
2025-03-11 01:01:43.297853 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b', 'data_vg': 'ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b'})
2025-03-11 01:01:43.299627 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-25356c17-92f4-5cd4-84bc-9f6437381575', 'data_vg': 'ceph-25356c17-92f4-5cd4-84bc-9f6437381575'})
2025-03-11 01:01:43.300453 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:01:43.305021 | orchestrator |
2025-03-11 01:01:43.306161 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-03-11 01:01:43.306898 | orchestrator | Tuesday 11 March 2025 01:01:43 +0000 (0:00:00.203) 0:00:27.966 *********
2025-03-11 01:01:44.036892 | orchestrator | ok: [testbed-node-3] => {
2025-03-11 01:01:44.037636 | orchestrator |  "lvm_report": {
2025-03-11 01:01:44.038309 | orchestrator |  "lv": [
2025-03-11 01:01:44.039411 | orchestrator |  {
2025-03-11 01:01:44.040432 | orchestrator |  "lv_name": "osd-block-25356c17-92f4-5cd4-84bc-9f6437381575",
2025-03-11 01:01:44.042155 | orchestrator |  "vg_name": "ceph-25356c17-92f4-5cd4-84bc-9f6437381575"
2025-03-11 01:01:44.042876 | orchestrator |  },
2025-03-11 01:01:44.043868 | orchestrator |  {
2025-03-11 01:01:44.044705 | orchestrator |  "lv_name": "osd-block-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b",
2025-03-11 01:01:44.045469 | orchestrator |  "vg_name": "ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b"
2025-03-11 01:01:44.046085 | orchestrator |  }
2025-03-11 01:01:44.046986 | orchestrator |  ],
2025-03-11 01:01:44.047947 | orchestrator |  "pv": [
2025-03-11 01:01:44.048314 | orchestrator |  {
2025-03-11 01:01:44.049092 | orchestrator |  "pv_name": "/dev/sdb",
2025-03-11 01:01:44.049802 | orchestrator |  "vg_name": "ceph-f72f4ade-bca7-59b7-8aa7-c340bc3ca60b"
2025-03-11 01:01:44.050553 | orchestrator |  },
2025-03-11 01:01:44.050993 | orchestrator |  {
2025-03-11 01:01:44.051675 | orchestrator |  "pv_name": "/dev/sdc",
2025-03-11 01:01:44.052237 | orchestrator |  "vg_name": "ceph-25356c17-92f4-5cd4-84bc-9f6437381575"
2025-03-11 01:01:44.053151 | orchestrator |  }
2025-03-11 01:01:44.054104 | orchestrator |  ]
2025-03-11 01:01:44.054530 | orchestrator |  }
2025-03-11 01:01:44.055201 | orchestrator | }
2025-03-11 01:01:44.056500 | orchestrator |
2025-03-11 01:01:44.057159 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-03-11 01:01:44.057704 | orchestrator |
2025-03-11 01:01:44.058665 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-03-11 01:01:44.059106 | orchestrator | Tuesday 11 March 2025 01:01:44 +0000 (0:00:00.737) 0:00:28.703 *********
2025-03-11 01:01:44.726922 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-03-11 01:01:44.727173 | orchestrator |
2025-03-11 01:01:44.728394 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-03-11 01:01:44.729554 | orchestrator | Tuesday 11 March 2025 01:01:44 +0000 (0:00:00.689) 0:00:29.393 *********
2025-03-11 01:01:44.978550 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:01:44.978945 | orchestrator |
2025-03-11 01:01:44.978978 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:44.979002 | orchestrator | Tuesday 11 March 2025 01:01:44 +0000 (0:00:00.252) 0:00:29.645 *********
2025-03-11 01:01:45.550225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-03-11 01:01:45.550705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-03-11 01:01:45.550737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-03-11 01:01:45.550767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-03-11 01:01:45.550783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-03-11 01:01:45.550804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-03-11 01:01:45.556922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-03-11 01:01:45.557015 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-03-11 01:01:45.557579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-03-11 01:01:45.557649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-03-11 01:01:45.557826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-03-11 01:01:45.558464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-03-11 01:01:45.559508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-03-11 01:01:45.559831 | orchestrator |
2025-03-11 01:01:45.559858 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:45.559957 | orchestrator | Tuesday 11 March 2025 01:01:45 +0000 (0:00:00.568) 0:00:30.214 *********
2025-03-11 01:01:45.810295 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:45.810912 | orchestrator |
2025-03-11 01:01:45.811215 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:45.812262 | orchestrator | Tuesday 11 March 2025 01:01:45 +0000 (0:00:00.263) 0:00:30.477 *********
2025-03-11 01:01:46.058127 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:46.059470 | orchestrator |
2025-03-11 01:01:46.060846 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:46.060880 | orchestrator | Tuesday 11 March 2025 01:01:46 +0000 (0:00:00.249) 0:00:30.726 *********
2025-03-11 01:01:46.295086 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:46.295215 | orchestrator |
2025-03-11 01:01:46.295796 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:46.296134 | orchestrator | Tuesday 11 March 2025 01:01:46 +0000 (0:00:00.237) 0:00:30.963 *********
2025-03-11 01:01:46.561394 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:46.562505 | orchestrator |
2025-03-11 01:01:46.562588 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:46.563307 | orchestrator | Tuesday 11 March 2025 01:01:46 +0000 (0:00:00.265) 0:00:31.229 *********
2025-03-11 01:01:46.796409 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:46.797837 | orchestrator |
2025-03-11 01:01:46.798111 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:46.801856 | orchestrator | Tuesday 11 March 2025 01:01:46 +0000 (0:00:00.234) 0:00:31.463 *********
2025-03-11 01:01:47.055811 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:47.056321 | orchestrator |
2025-03-11 01:01:47.056712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:47.057361 | orchestrator | Tuesday 11 March 2025 01:01:47 +0000 (0:00:00.261) 0:00:31.724 *********
2025-03-11 01:01:47.289193 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:47.291113 | orchestrator |
2025-03-11 01:01:47.295407 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:47.296172 | orchestrator | Tuesday 11 March 2025 01:01:47 +0000 (0:00:00.232) 0:00:31.957 *********
2025-03-11 01:01:48.134682 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:48.135870 | orchestrator |
2025-03-11 01:01:48.135911 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:48.139426 | orchestrator | Tuesday 11 March 2025 01:01:48 +0000 (0:00:00.843) 0:00:32.801 *********
2025-03-11 01:01:48.641776 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2c66825c-4af4-4039-9be2-0884ea12c780)
2025-03-11 01:01:48.642874 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2c66825c-4af4-4039-9be2-0884ea12c780)
2025-03-11 01:01:48.643696 | orchestrator |
2025-03-11 01:01:48.645623 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:48.650185 | orchestrator | Tuesday 11 March 2025 01:01:48 +0000 (0:00:00.505) 0:00:33.306 *********
2025-03-11 01:01:49.182971 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_663fc4ce-d59c-4a76-8f0a-41179b606a99)
2025-03-11 01:01:49.183813 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_663fc4ce-d59c-4a76-8f0a-41179b606a99)
2025-03-11 01:01:49.184435 | orchestrator |
2025-03-11 01:01:49.188303 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:49.189238 | orchestrator | Tuesday 11 March 2025 01:01:49 +0000 (0:00:00.542) 0:00:33.849 *********
2025-03-11 01:01:49.708081 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_96f3a3bc-1bc2-4311-aa73-ad4d834104c1)
2025-03-11 01:01:49.710232 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_96f3a3bc-1bc2-4311-aa73-ad4d834104c1)
2025-03-11 01:01:49.711087 | orchestrator |
2025-03-11 01:01:49.711126 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:49.712628 | orchestrator | Tuesday 11 March 2025 01:01:49 +0000 (0:00:00.524) 0:00:34.374 *********
2025-03-11 01:01:50.201070 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_35890ba3-27d1-4ca1-853f-43468bc69b0e)
2025-03-11 01:01:50.201671 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_35890ba3-27d1-4ca1-853f-43468bc69b0e)
2025-03-11 01:01:50.202276 | orchestrator |
2025-03-11 01:01:50.202793 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-03-11 01:01:50.203444 | orchestrator | Tuesday 11 March 2025 01:01:50 +0000 (0:00:00.495) 0:00:34.869 *********
2025-03-11 01:01:50.559183 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-03-11 01:01:50.560100 | orchestrator |
2025-03-11 01:01:50.560624 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:50.561366 | orchestrator | Tuesday 11 March 2025 01:01:50 +0000 (0:00:00.358) 0:00:35.228 *********
2025-03-11 01:01:51.104387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-03-11 01:01:51.106129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-03-11 01:01:51.106986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-03-11 01:01:51.107925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-03-11 01:01:51.108821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-03-11 01:01:51.110109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-03-11 01:01:51.111010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-03-11 01:01:51.111184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-03-11 01:01:51.111648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-03-11 01:01:51.112116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-03-11 01:01:51.113793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-03-11 01:01:51.113898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-03-11 01:01:51.113971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-03-11 01:01:51.113993 | orchestrator |
2025-03-11 01:01:51.114125 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:51.114859 | orchestrator | Tuesday 11 March 2025 01:01:51 +0000 (0:00:00.542) 0:00:35.770 *********
2025-03-11 01:01:51.328510 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:51.329807 | orchestrator |
2025-03-11 01:01:51.332712 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:51.757557 | orchestrator | Tuesday 11 March 2025 01:01:51 +0000 (0:00:00.226) 0:00:35.996 *********
2025-03-11 01:01:51.757653 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:51.757859 | orchestrator |
2025-03-11 01:01:51.757875 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:51.758313 | orchestrator | Tuesday 11 March 2025 01:01:51 +0000 (0:00:00.428) 0:00:36.425 *********
2025-03-11 01:01:52.004779 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:52.005923 | orchestrator |
2025-03-11 01:01:52.007132 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:52.008198 | orchestrator | Tuesday 11 March 2025 01:01:51 +0000 (0:00:00.247) 0:00:36.672 *********
2025-03-11 01:01:52.266335 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:52.266609 | orchestrator |
2025-03-11 01:01:52.267219 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:52.267257 | orchestrator | Tuesday 11 March 2025 01:01:52 +0000 (0:00:00.261) 0:00:36.934 *********
2025-03-11 01:01:52.521376 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:52.522011 | orchestrator |
2025-03-11 01:01:52.522352 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:52.523143 | orchestrator | Tuesday 11 March 2025 01:01:52 +0000 (0:00:00.254) 0:00:37.189 *********
2025-03-11 01:01:52.743687 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:52.748564 | orchestrator |
2025-03-11 01:01:52.749229 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:52.749246 | orchestrator | Tuesday 11 March 2025 01:01:52 +0000 (0:00:00.221) 0:00:37.411 *********
2025-03-11 01:01:52.977940 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:52.978927 | orchestrator |
2025-03-11 01:01:52.980547 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:52.981130 | orchestrator | Tuesday 11 March 2025 01:01:52 +0000 (0:00:00.235) 0:00:37.646 *********
2025-03-11 01:01:53.197988 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:53.199302 | orchestrator |
2025-03-11 01:01:53.199323 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:53.200164 | orchestrator | Tuesday 11 March 2025 01:01:53 +0000 (0:00:00.219) 0:00:37.866 *********
2025-03-11 01:01:53.986727 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-03-11 01:01:53.987441 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-03-11 01:01:53.987459 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-03-11 01:01:53.988747 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-03-11 01:01:53.989571 | orchestrator |
2025-03-11 01:01:53.990366 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:53.991180 | orchestrator | Tuesday 11 March 2025 01:01:53 +0000 (0:00:00.787) 0:00:38.653 *********
2025-03-11 01:01:54.260972 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:54.262062 | orchestrator |
2025-03-11 01:01:54.262778 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:54.263143 | orchestrator | Tuesday 11 March 2025 01:01:54 +0000 (0:00:00.275) 0:00:38.928 *********
2025-03-11 01:01:54.608188 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:54.608956 | orchestrator |
2025-03-11 01:01:54.610174 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:54.610983 | orchestrator | Tuesday 11 March 2025 01:01:54 +0000 (0:00:00.348) 0:00:39.277 *********
2025-03-11 01:01:54.837173 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:54.838786 | orchestrator |
2025-03-11 01:01:54.841609 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-03-11 01:01:55.561727 | orchestrator | Tuesday 11 March 2025 01:01:54 +0000 (0:00:00.227) 0:00:39.504 *********
2025-03-11 01:01:55.561851 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:55.562815 | orchestrator |
2025-03-11 01:01:55.564889 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-03-11 01:01:55.566260 | orchestrator | Tuesday 11 March 2025 01:01:55 +0000 (0:00:00.724) 0:00:40.229 *********
2025-03-11 01:01:55.729863 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:55.730237 | orchestrator |
2025-03-11 01:01:55.730603 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-03-11 01:01:55.731380 | orchestrator | Tuesday 11 March 2025 01:01:55 +0000 (0:00:00.168) 0:00:40.397 *********
2025-03-11 01:01:55.974203 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'afd03ade-fddc-513b-974e-73ae3400739d'}})
2025-03-11 01:01:55.974633 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2c164e24-0081-5461-8b83-1ef82bb0535c'}})
2025-03-11 01:01:55.975044 | orchestrator |
2025-03-11 01:01:55.975881 | orchestrator | TASK [Create block VGs] ********************************************************
2025-03-11 01:01:55.976573 | orchestrator | Tuesday 11 March 2025 01:01:55 +0000 (0:00:00.244) 0:00:40.642 *********
2025-03-11 01:01:58.089527 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})
2025-03-11 01:01:58.089783 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})
2025-03-11 01:01:58.090420 | orchestrator |
2025-03-11 01:01:58.090457 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-03-11 01:01:58.279329 | orchestrator | Tuesday 11 March 2025 01:01:58 +0000 (0:00:02.111) 0:00:42.754 *********
2025-03-11 01:01:58.279376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})
2025-03-11 01:01:58.280067 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})
2025-03-11 01:01:58.280970 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:58.281811 | orchestrator |
2025-03-11 01:01:58.282084 | orchestrator | TASK [Create block LVs] ********************************************************
2025-03-11 01:01:58.282709 | orchestrator | Tuesday 11 March 2025 01:01:58 +0000 (0:00:00.193) 0:00:42.947 *********
2025-03-11 01:01:59.665902 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})
2025-03-11 01:01:59.666488 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})
2025-03-11 01:01:59.667097 | orchestrator |
2025-03-11 01:01:59.667714 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-03-11 01:01:59.668690 | orchestrator | Tuesday 11 March 2025 01:01:59 +0000 (0:00:01.386) 0:00:44.334 *********
2025-03-11 01:01:59.863597 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})
2025-03-11 01:01:59.864814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})
2025-03-11 01:01:59.865115 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:01:59.865564 | orchestrator |
2025-03-11 01:01:59.866301 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-03-11 01:01:59.866658 | orchestrator | Tuesday 11 March 2025 01:01:59 +0000 (0:00:00.198) 0:00:44.532 *********
2025-03-11 01:02:00.021434 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:02:00.021693 | orchestrator |
2025-03-11 01:02:00.021769 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-03-11 01:02:00.021868 | orchestrator | Tuesday 11 March 2025 01:02:00 +0000 (0:00:00.158) 0:00:44.690 *********
2025-03-11 01:02:00.214958 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})
2025-03-11 01:02:00.218810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})
2025-03-11 01:02:00.218922 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:02:00.218944 | orchestrator |
2025-03-11 01:02:00.218964 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-03-11 01:02:00.219652 | orchestrator | Tuesday 11 March 2025 01:02:00 +0000 (0:00:00.191) 0:00:44.882 *********
2025-03-11 01:02:00.602269 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:02:00.603192 | orchestrator |
2025-03-11 01:02:00.603261 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-03-11 01:02:00.604063 | orchestrator | Tuesday 11 March 2025 01:02:00 +0000 (0:00:00.386) 0:00:45.269 *********
2025-03-11 01:02:00.799031 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})
2025-03-11 01:02:00.799754 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})
2025-03-11 01:02:00.800929 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:02:00.804083 | orchestrator |
2025-03-11 01:02:00.957771 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-03-11 01:02:00.957860 | orchestrator | Tuesday 11 March 2025 01:02:00 +0000 (0:00:00.198) 0:00:45.467 *********
2025-03-11 01:02:00.957888 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:02:00.958746 | orchestrator |
2025-03-11 01:02:00.959799 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-03-11 01:02:00.960740 | orchestrator | Tuesday 11 March 2025 01:02:00 +0000 (0:00:00.157) 0:00:45.625 *********
2025-03-11 01:02:01.177692 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})
2025-03-11 01:02:01.178196 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})
2025-03-11 01:02:01.178240 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:02:01.179531 | orchestrator |
2025-03-11 01:02:01.180408 | orchestrator | TASK [Prepare variables for OSD count check]
*********************************** 2025-03-11 01:02:01.180763 | orchestrator | Tuesday 11 March 2025 01:02:01 +0000 (0:00:00.219) 0:00:45.845 ********* 2025-03-11 01:02:01.350200 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:02:01.350844 | orchestrator | 2025-03-11 01:02:01.350945 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-11 01:02:01.351879 | orchestrator | Tuesday 11 March 2025 01:02:01 +0000 (0:00:00.173) 0:00:46.018 ********* 2025-03-11 01:02:01.542155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:01.542309 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:01.542327 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:01.542346 | orchestrator | 2025-03-11 01:02:01.542820 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-11 01:02:01.542997 | orchestrator | Tuesday 11 March 2025 01:02:01 +0000 (0:00:00.191) 0:00:46.209 ********* 2025-03-11 01:02:01.741863 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:01.742934 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:01.743581 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:01.743749 | orchestrator | 2025-03-11 01:02:01.744728 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-11 01:02:01.745591 | orchestrator | Tuesday 11 March 2025 01:02:01 +0000 (0:00:00.201) 0:00:46.411 
********* 2025-03-11 01:02:01.930174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:01.930852 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:01.934272 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:01.934613 | orchestrator | 2025-03-11 01:02:01.934638 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-11 01:02:01.934656 | orchestrator | Tuesday 11 March 2025 01:02:01 +0000 (0:00:00.185) 0:00:46.596 ********* 2025-03-11 01:02:02.111091 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:02.112698 | orchestrator | 2025-03-11 01:02:02.112725 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-11 01:02:02.114268 | orchestrator | Tuesday 11 March 2025 01:02:02 +0000 (0:00:00.182) 0:00:46.779 ********* 2025-03-11 01:02:02.272321 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:02.273236 | orchestrator | 2025-03-11 01:02:02.274454 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-11 01:02:02.275354 | orchestrator | Tuesday 11 March 2025 01:02:02 +0000 (0:00:00.159) 0:00:46.938 ********* 2025-03-11 01:02:02.420947 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:02.421964 | orchestrator | 2025-03-11 01:02:02.423005 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-11 01:02:02.423954 | orchestrator | Tuesday 11 March 2025 01:02:02 +0000 (0:00:00.150) 0:00:47.089 ********* 2025-03-11 01:02:02.581176 | orchestrator | ok: [testbed-node-4] => { 2025-03-11 01:02:02.583419 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-11 
01:02:02.583642 | orchestrator | } 2025-03-11 01:02:02.584797 | orchestrator | 2025-03-11 01:02:02.585148 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-11 01:02:02.586071 | orchestrator | Tuesday 11 March 2025 01:02:02 +0000 (0:00:00.159) 0:00:47.248 ********* 2025-03-11 01:02:02.986137 | orchestrator | ok: [testbed-node-4] => { 2025-03-11 01:02:02.987180 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-11 01:02:02.987946 | orchestrator | } 2025-03-11 01:02:02.989673 | orchestrator | 2025-03-11 01:02:02.990272 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-11 01:02:02.991634 | orchestrator | Tuesday 11 March 2025 01:02:02 +0000 (0:00:00.403) 0:00:47.651 ********* 2025-03-11 01:02:03.164920 | orchestrator | ok: [testbed-node-4] => { 2025-03-11 01:02:03.165993 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-11 01:02:03.167160 | orchestrator | } 2025-03-11 01:02:03.167991 | orchestrator | 2025-03-11 01:02:03.168676 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-11 01:02:03.169757 | orchestrator | Tuesday 11 March 2025 01:02:03 +0000 (0:00:00.181) 0:00:47.833 ********* 2025-03-11 01:02:03.668775 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:02:03.668979 | orchestrator | 2025-03-11 01:02:03.669041 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-11 01:02:03.669723 | orchestrator | Tuesday 11 March 2025 01:02:03 +0000 (0:00:00.502) 0:00:48.336 ********* 2025-03-11 01:02:04.188557 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:02:04.189181 | orchestrator | 2025-03-11 01:02:04.189228 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-11 01:02:04.189637 | orchestrator | Tuesday 11 March 2025 01:02:04 +0000 (0:00:00.520) 0:00:48.856 ********* 2025-03-11 
01:02:04.689748 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:02:04.690861 | orchestrator | 2025-03-11 01:02:04.691992 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-11 01:02:04.693161 | orchestrator | Tuesday 11 March 2025 01:02:04 +0000 (0:00:00.500) 0:00:49.356 ********* 2025-03-11 01:02:04.863363 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:02:04.864726 | orchestrator | 2025-03-11 01:02:04.864760 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-11 01:02:04.865836 | orchestrator | Tuesday 11 March 2025 01:02:04 +0000 (0:00:00.174) 0:00:49.530 ********* 2025-03-11 01:02:04.993373 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:04.994127 | orchestrator | 2025-03-11 01:02:04.995029 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-11 01:02:04.998810 | orchestrator | Tuesday 11 March 2025 01:02:04 +0000 (0:00:00.130) 0:00:49.661 ********* 2025-03-11 01:02:05.109272 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:05.109362 | orchestrator | 2025-03-11 01:02:05.110247 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-11 01:02:05.110663 | orchestrator | Tuesday 11 March 2025 01:02:05 +0000 (0:00:00.115) 0:00:49.776 ********* 2025-03-11 01:02:05.256897 | orchestrator | ok: [testbed-node-4] => { 2025-03-11 01:02:05.257165 | orchestrator |  "vgs_report": { 2025-03-11 01:02:05.257443 | orchestrator |  "vg": [] 2025-03-11 01:02:05.259143 | orchestrator |  } 2025-03-11 01:02:05.261901 | orchestrator | } 2025-03-11 01:02:05.262257 | orchestrator | 2025-03-11 01:02:05.262286 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-11 01:02:05.266416 | orchestrator | Tuesday 11 March 2025 01:02:05 +0000 (0:00:00.147) 0:00:49.924 ********* 2025-03-11 01:02:05.424474 | 
orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:05.425317 | orchestrator | 2025-03-11 01:02:05.426057 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-11 01:02:05.427122 | orchestrator | Tuesday 11 March 2025 01:02:05 +0000 (0:00:00.167) 0:00:50.091 ********* 2025-03-11 01:02:05.567606 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:05.567722 | orchestrator | 2025-03-11 01:02:05.568133 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-11 01:02:05.569618 | orchestrator | Tuesday 11 March 2025 01:02:05 +0000 (0:00:00.144) 0:00:50.236 ********* 2025-03-11 01:02:05.936307 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:05.937345 | orchestrator | 2025-03-11 01:02:05.937381 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-11 01:02:05.937937 | orchestrator | Tuesday 11 March 2025 01:02:05 +0000 (0:00:00.366) 0:00:50.603 ********* 2025-03-11 01:02:06.101845 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:06.102213 | orchestrator | 2025-03-11 01:02:06.102248 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-11 01:02:06.102661 | orchestrator | Tuesday 11 March 2025 01:02:06 +0000 (0:00:00.166) 0:00:50.770 ********* 2025-03-11 01:02:06.254117 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:06.254224 | orchestrator | 2025-03-11 01:02:06.255688 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-11 01:02:06.256198 | orchestrator | Tuesday 11 March 2025 01:02:06 +0000 (0:00:00.150) 0:00:50.921 ********* 2025-03-11 01:02:06.421058 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:06.421480 | orchestrator | 2025-03-11 01:02:06.424136 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2025-03-11 01:02:06.573096 | orchestrator | Tuesday 11 March 2025 01:02:06 +0000 (0:00:00.165) 0:00:51.086 ********* 2025-03-11 01:02:06.573226 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:06.574358 | orchestrator | 2025-03-11 01:02:06.575391 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-11 01:02:06.575816 | orchestrator | Tuesday 11 March 2025 01:02:06 +0000 (0:00:00.153) 0:00:51.240 ********* 2025-03-11 01:02:06.735935 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:06.736947 | orchestrator | 2025-03-11 01:02:06.737792 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-11 01:02:06.739149 | orchestrator | Tuesday 11 March 2025 01:02:06 +0000 (0:00:00.163) 0:00:51.404 ********* 2025-03-11 01:02:06.894305 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:06.896647 | orchestrator | 2025-03-11 01:02:06.896686 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-11 01:02:06.899415 | orchestrator | Tuesday 11 March 2025 01:02:06 +0000 (0:00:00.157) 0:00:51.561 ********* 2025-03-11 01:02:07.050233 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:07.051332 | orchestrator | 2025-03-11 01:02:07.053166 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-11 01:02:07.055437 | orchestrator | Tuesday 11 March 2025 01:02:07 +0000 (0:00:00.156) 0:00:51.717 ********* 2025-03-11 01:02:07.197386 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:07.197972 | orchestrator | 2025-03-11 01:02:07.199761 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-11 01:02:07.200409 | orchestrator | Tuesday 11 March 2025 01:02:07 +0000 (0:00:00.147) 0:00:51.865 ********* 2025-03-11 01:02:07.354872 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:07.356987 
| orchestrator | 2025-03-11 01:02:07.358917 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-11 01:02:07.497179 | orchestrator | Tuesday 11 March 2025 01:02:07 +0000 (0:00:00.156) 0:00:52.021 ********* 2025-03-11 01:02:07.497229 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:07.499374 | orchestrator | 2025-03-11 01:02:07.501123 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-11 01:02:07.503342 | orchestrator | Tuesday 11 March 2025 01:02:07 +0000 (0:00:00.143) 0:00:52.165 ********* 2025-03-11 01:02:07.651978 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:07.653142 | orchestrator | 2025-03-11 01:02:07.653778 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-11 01:02:07.654826 | orchestrator | Tuesday 11 March 2025 01:02:07 +0000 (0:00:00.154) 0:00:52.320 ********* 2025-03-11 01:02:08.080144 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:08.080767 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:08.081871 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:08.082117 | orchestrator | 2025-03-11 01:02:08.083063 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-11 01:02:08.083939 | orchestrator | Tuesday 11 March 2025 01:02:08 +0000 (0:00:00.428) 0:00:52.748 ********* 2025-03-11 01:02:08.257385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:08.257496 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:08.258389 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:08.258818 | orchestrator | 2025-03-11 01:02:08.259686 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-11 01:02:08.260347 | orchestrator | Tuesday 11 March 2025 01:02:08 +0000 (0:00:00.177) 0:00:52.926 ********* 2025-03-11 01:02:08.442350 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:08.442590 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:08.442614 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:08.442628 | orchestrator | 2025-03-11 01:02:08.442645 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-11 01:02:08.442841 | orchestrator | Tuesday 11 March 2025 01:02:08 +0000 (0:00:00.182) 0:00:53.108 ********* 2025-03-11 01:02:08.609928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:08.610295 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:08.610772 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:08.611265 | orchestrator | 2025-03-11 01:02:08.611670 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-11 01:02:08.612686 | orchestrator | Tuesday 11 March 2025 01:02:08 +0000 (0:00:00.169) 0:00:53.277 ********* 2025-03-11 
01:02:08.795938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:08.796301 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:08.796724 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:08.797499 | orchestrator | 2025-03-11 01:02:08.798279 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-11 01:02:08.798543 | orchestrator | Tuesday 11 March 2025 01:02:08 +0000 (0:00:00.186) 0:00:53.464 ********* 2025-03-11 01:02:08.981363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:08.982775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:08.983567 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:08.984360 | orchestrator | 2025-03-11 01:02:08.984699 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-11 01:02:08.985564 | orchestrator | Tuesday 11 March 2025 01:02:08 +0000 (0:00:00.184) 0:00:53.649 ********* 2025-03-11 01:02:09.172486 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:09.173969 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:09.174870 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:09.174898 | orchestrator | 
2025-03-11 01:02:09.174916 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-11 01:02:09.174938 | orchestrator | Tuesday 11 March 2025 01:02:09 +0000 (0:00:00.190) 0:00:53.840 ********* 2025-03-11 01:02:09.404955 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:09.407943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:09.409006 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:09.409031 | orchestrator | 2025-03-11 01:02:09.409050 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-11 01:02:09.410441 | orchestrator | Tuesday 11 March 2025 01:02:09 +0000 (0:00:00.228) 0:00:54.069 ********* 2025-03-11 01:02:09.936895 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:02:09.937277 | orchestrator | 2025-03-11 01:02:09.937310 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-11 01:02:09.937722 | orchestrator | Tuesday 11 March 2025 01:02:09 +0000 (0:00:00.536) 0:00:54.605 ********* 2025-03-11 01:02:10.509761 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:02:10.510079 | orchestrator | 2025-03-11 01:02:10.510279 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-11 01:02:10.512091 | orchestrator | Tuesday 11 March 2025 01:02:10 +0000 (0:00:00.567) 0:00:55.173 ********* 2025-03-11 01:02:10.948241 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:02:10.948943 | orchestrator | 2025-03-11 01:02:10.949547 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-11 01:02:10.950540 | orchestrator | Tuesday 11 March 2025 
01:02:10 +0000 (0:00:00.442) 0:00:55.616 ********* 2025-03-11 01:02:11.144964 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'vg_name': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'}) 2025-03-11 01:02:11.145601 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'vg_name': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'}) 2025-03-11 01:02:11.145897 | orchestrator | 2025-03-11 01:02:11.146288 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-11 01:02:11.146631 | orchestrator | Tuesday 11 March 2025 01:02:11 +0000 (0:00:00.197) 0:00:55.813 ********* 2025-03-11 01:02:11.329559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:11.330334 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:11.331384 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:11.332425 | orchestrator | 2025-03-11 01:02:11.335043 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-11 01:02:11.507456 | orchestrator | Tuesday 11 March 2025 01:02:11 +0000 (0:00:00.184) 0:00:55.998 ********* 2025-03-11 01:02:11.507601 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:11.508397 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:11.509936 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:11.512057 | orchestrator | 2025-03-11 
01:02:11.706244 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-11 01:02:11.706329 | orchestrator | Tuesday 11 March 2025 01:02:11 +0000 (0:00:00.176) 0:00:56.174 ********* 2025-03-11 01:02:11.706352 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-afd03ade-fddc-513b-974e-73ae3400739d', 'data_vg': 'ceph-afd03ade-fddc-513b-974e-73ae3400739d'})  2025-03-11 01:02:11.706814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c', 'data_vg': 'ceph-2c164e24-0081-5461-8b83-1ef82bb0535c'})  2025-03-11 01:02:11.707057 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:02:11.707945 | orchestrator | 2025-03-11 01:02:11.708715 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-11 01:02:11.709457 | orchestrator | Tuesday 11 March 2025 01:02:11 +0000 (0:00:00.200) 0:00:56.375 ********* 2025-03-11 01:02:12.759701 | orchestrator | ok: [testbed-node-4] => { 2025-03-11 01:02:12.761690 | orchestrator |  "lvm_report": { 2025-03-11 01:02:12.762341 | orchestrator |  "lv": [ 2025-03-11 01:02:12.764682 | orchestrator |  { 2025-03-11 01:02:12.765366 | orchestrator |  "lv_name": "osd-block-2c164e24-0081-5461-8b83-1ef82bb0535c", 2025-03-11 01:02:12.765945 | orchestrator |  "vg_name": "ceph-2c164e24-0081-5461-8b83-1ef82bb0535c" 2025-03-11 01:02:12.766791 | orchestrator |  }, 2025-03-11 01:02:12.767084 | orchestrator |  { 2025-03-11 01:02:12.767736 | orchestrator |  "lv_name": "osd-block-afd03ade-fddc-513b-974e-73ae3400739d", 2025-03-11 01:02:12.768932 | orchestrator |  "vg_name": "ceph-afd03ade-fddc-513b-974e-73ae3400739d" 2025-03-11 01:02:12.769267 | orchestrator |  } 2025-03-11 01:02:12.770084 | orchestrator |  ], 2025-03-11 01:02:12.770284 | orchestrator |  "pv": [ 2025-03-11 01:02:12.771673 | orchestrator |  { 2025-03-11 01:02:12.772293 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-11 
01:02:12.772318 | orchestrator |  "vg_name": "ceph-afd03ade-fddc-513b-974e-73ae3400739d" 2025-03-11 01:02:12.772922 | orchestrator |  }, 2025-03-11 01:02:12.773734 | orchestrator |  { 2025-03-11 01:02:12.775047 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-11 01:02:12.775567 | orchestrator |  "vg_name": "ceph-2c164e24-0081-5461-8b83-1ef82bb0535c" 2025-03-11 01:02:12.777009 | orchestrator |  } 2025-03-11 01:02:12.778437 | orchestrator |  ] 2025-03-11 01:02:12.779297 | orchestrator |  } 2025-03-11 01:02:12.779904 | orchestrator | } 2025-03-11 01:02:12.780627 | orchestrator | 2025-03-11 01:02:12.781416 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-11 01:02:12.782127 | orchestrator | 2025-03-11 01:02:12.782460 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-11 01:02:12.783188 | orchestrator | Tuesday 11 March 2025 01:02:12 +0000 (0:00:01.051) 0:00:57.427 ********* 2025-03-11 01:02:13.049469 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-11 01:02:13.049964 | orchestrator | 2025-03-11 01:02:13.050013 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-11 01:02:13.050680 | orchestrator | Tuesday 11 March 2025 01:02:13 +0000 (0:00:00.291) 0:00:57.718 ********* 2025-03-11 01:02:13.333299 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:02:13.334234 | orchestrator | 2025-03-11 01:02:13.334401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:13.334962 | orchestrator | Tuesday 11 March 2025 01:02:13 +0000 (0:00:00.283) 0:00:58.001 ********* 2025-03-11 01:02:13.839612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-03-11 01:02:13.840561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-03-11 
01:02:13.841120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-03-11 01:02:13.841906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-03-11 01:02:13.842784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-03-11 01:02:13.843710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-03-11 01:02:13.844201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-03-11 01:02:13.845146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-03-11 01:02:13.845605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-03-11 01:02:13.846312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-03-11 01:02:13.846830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-03-11 01:02:13.847092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-03-11 01:02:13.848025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-03-11 01:02:13.848880 | orchestrator | 2025-03-11 01:02:13.849401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:13.849849 | orchestrator | Tuesday 11 March 2025 01:02:13 +0000 (0:00:00.505) 0:00:58.506 ********* 2025-03-11 01:02:14.064445 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:14.283733 | orchestrator | 2025-03-11 01:02:14.283837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:14.283857 | orchestrator | Tuesday 11 March 2025 01:02:14 +0000 (0:00:00.226) 0:00:58.732 
********* 2025-03-11 01:02:14.283888 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:14.284643 | orchestrator | 2025-03-11 01:02:14.289083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:14.498965 | orchestrator | Tuesday 11 March 2025 01:02:14 +0000 (0:00:00.218) 0:00:58.951 ********* 2025-03-11 01:02:14.499062 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:14.499621 | orchestrator | 2025-03-11 01:02:14.499646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:14.499667 | orchestrator | Tuesday 11 March 2025 01:02:14 +0000 (0:00:00.214) 0:00:59.165 ********* 2025-03-11 01:02:14.711111 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:14.938132 | orchestrator | 2025-03-11 01:02:14.938249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:14.938270 | orchestrator | Tuesday 11 March 2025 01:02:14 +0000 (0:00:00.212) 0:00:59.378 ********* 2025-03-11 01:02:14.938301 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:14.938766 | orchestrator | 2025-03-11 01:02:14.939007 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:14.939835 | orchestrator | Tuesday 11 March 2025 01:02:14 +0000 (0:00:00.228) 0:00:59.607 ********* 2025-03-11 01:02:15.627463 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:15.628016 | orchestrator | 2025-03-11 01:02:15.629204 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:15.629404 | orchestrator | Tuesday 11 March 2025 01:02:15 +0000 (0:00:00.687) 0:01:00.294 ********* 2025-03-11 01:02:15.860257 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:15.860383 | orchestrator | 2025-03-11 01:02:15.860586 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-03-11 01:02:15.863327 | orchestrator | Tuesday 11 March 2025 01:02:15 +0000 (0:00:00.233) 0:01:00.528 ********* 2025-03-11 01:02:16.087709 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:16.088579 | orchestrator | 2025-03-11 01:02:16.088615 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:16.089701 | orchestrator | Tuesday 11 March 2025 01:02:16 +0000 (0:00:00.227) 0:01:00.756 ********* 2025-03-11 01:02:16.592175 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_89492e79-5deb-49f7-a1a7-185d4ce5c08c) 2025-03-11 01:02:16.592316 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_89492e79-5deb-49f7-a1a7-185d4ce5c08c) 2025-03-11 01:02:16.593491 | orchestrator | 2025-03-11 01:02:16.593773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:16.594131 | orchestrator | Tuesday 11 March 2025 01:02:16 +0000 (0:00:00.503) 0:01:01.259 ********* 2025-03-11 01:02:17.068570 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4a074a62-8b02-498d-8d1e-97a298b60d07) 2025-03-11 01:02:17.068750 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4a074a62-8b02-498d-8d1e-97a298b60d07) 2025-03-11 01:02:17.069238 | orchestrator | 2025-03-11 01:02:17.069767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:17.069796 | orchestrator | Tuesday 11 March 2025 01:02:17 +0000 (0:00:00.476) 0:01:01.735 ********* 2025-03-11 01:02:17.533341 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7ad77790-9240-4bf7-8fbd-881e22f1e07b) 2025-03-11 01:02:17.533677 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7ad77790-9240-4bf7-8fbd-881e22f1e07b) 2025-03-11 01:02:17.535214 | orchestrator | 2025-03-11 01:02:17.536335 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:17.537775 | orchestrator | Tuesday 11 March 2025 01:02:17 +0000 (0:00:00.465) 0:01:02.200 ********* 2025-03-11 01:02:18.035237 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6ce862c5-1280-46ce-a44b-7fdf993418a7) 2025-03-11 01:02:18.036806 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6ce862c5-1280-46ce-a44b-7fdf993418a7) 2025-03-11 01:02:18.036837 | orchestrator | 2025-03-11 01:02:18.036860 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-11 01:02:18.037218 | orchestrator | Tuesday 11 March 2025 01:02:18 +0000 (0:00:00.502) 0:01:02.703 ********* 2025-03-11 01:02:18.561313 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-11 01:02:19.179280 | orchestrator | 2025-03-11 01:02:19.179403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:19.179423 | orchestrator | Tuesday 11 March 2025 01:02:18 +0000 (0:00:00.524) 0:01:03.227 ********* 2025-03-11 01:02:19.179456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-03-11 01:02:19.179884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-03-11 01:02:19.180960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-03-11 01:02:19.182102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-03-11 01:02:19.183107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-03-11 01:02:19.184378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-03-11 01:02:19.185164 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-03-11 01:02:19.185854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-03-11 01:02:19.186659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-03-11 01:02:19.187431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-03-11 01:02:19.187866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-03-11 01:02:19.188021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-03-11 01:02:19.188459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-03-11 01:02:19.188903 | orchestrator | 2025-03-11 01:02:19.189172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:19.189736 | orchestrator | Tuesday 11 March 2025 01:02:19 +0000 (0:00:00.618) 0:01:03.846 ********* 2025-03-11 01:02:19.869093 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:19.869570 | orchestrator | 2025-03-11 01:02:19.870383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:19.870993 | orchestrator | Tuesday 11 March 2025 01:02:19 +0000 (0:00:00.690) 0:01:04.536 ********* 2025-03-11 01:02:20.087490 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:20.087718 | orchestrator | 2025-03-11 01:02:20.088424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:20.088675 | orchestrator | Tuesday 11 March 2025 01:02:20 +0000 (0:00:00.219) 0:01:04.756 ********* 2025-03-11 01:02:20.330658 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:20.331304 | orchestrator | 2025-03-11 01:02:20.332510 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:20.333471 | orchestrator | Tuesday 11 March 2025 01:02:20 +0000 (0:00:00.242) 0:01:04.998 ********* 2025-03-11 01:02:20.577109 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:20.577941 | orchestrator | 2025-03-11 01:02:20.578176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:20.578947 | orchestrator | Tuesday 11 March 2025 01:02:20 +0000 (0:00:00.244) 0:01:05.243 ********* 2025-03-11 01:02:20.787231 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:20.787940 | orchestrator | 2025-03-11 01:02:20.789619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:20.791900 | orchestrator | Tuesday 11 March 2025 01:02:20 +0000 (0:00:00.212) 0:01:05.455 ********* 2025-03-11 01:02:21.022368 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:21.022634 | orchestrator | 2025-03-11 01:02:21.023818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:21.024846 | orchestrator | Tuesday 11 March 2025 01:02:21 +0000 (0:00:00.233) 0:01:05.688 ********* 2025-03-11 01:02:21.268172 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:21.268326 | orchestrator | 2025-03-11 01:02:21.269284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:21.269991 | orchestrator | Tuesday 11 March 2025 01:02:21 +0000 (0:00:00.246) 0:01:05.935 ********* 2025-03-11 01:02:21.490708 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:21.491680 | orchestrator | 2025-03-11 01:02:21.492720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:21.493555 | orchestrator | Tuesday 11 March 2025 01:02:21 +0000 (0:00:00.223) 0:01:06.158 ********* 
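The discovery loop above includes `_add-device-links.yml` and `_add-device-partitions.yml` per device, skipping devices with nothing to contribute (the `loop0`..`loop7` items) and appending by-id links and partitions for the rest. A minimal sketch of that aggregation, using illustrative names (`gather_block_devices`, `by_id_links` are not OSISM's identifiers) and example values taken from the log output:

```python
# Hedged sketch of the device-discovery step: flatten /dev/disk/by-id links
# and partitions per device into one list of available block devices.
# Function and variable names are illustrative, not the playbook's own.

def gather_block_devices(devices, by_id_links, partitions):
    """Return the flat list of usable block-device identifiers."""
    available = []
    for dev in devices:
        available.extend(by_id_links.get(dev, []))  # e.g. scsi-0QEMU_QEMU_HARDDISK_<uuid>
        available.extend(partitions.get(dev, []))   # e.g. sda1, sda14, ...
    return available

# Example values as seen in this log:
links = {
    "sdb": ["scsi-0QEMU_QEMU_HARDDISK_89492e79-5deb-49f7-a1a7-185d4ce5c08c"],
    "sr0": ["ata-QEMU_DVD-ROM_QM00001"],
}
parts = {"sda": ["sda1", "sda14", "sda15", "sda16"]}
devs = ["loop0", "sda", "sdb", "sr0"]

print(gather_block_devices(devs, links, parts))
```

The loop devices drop out naturally because they have neither links nor partitions, which matches the long run of `skipping: [testbed-node-5]` results above.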
2025-03-11 01:02:22.488760 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-03-11 01:02:22.489342 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-03-11 01:02:22.489380 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-03-11 01:02:22.490643 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-03-11 01:02:22.492383 | orchestrator | 2025-03-11 01:02:22.493126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:22.494096 | orchestrator | Tuesday 11 March 2025 01:02:22 +0000 (0:00:00.994) 0:01:07.153 ********* 2025-03-11 01:02:22.713282 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:22.713720 | orchestrator | 2025-03-11 01:02:22.714134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:22.714459 | orchestrator | Tuesday 11 March 2025 01:02:22 +0000 (0:00:00.227) 0:01:07.380 ********* 2025-03-11 01:02:23.469096 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:23.469753 | orchestrator | 2025-03-11 01:02:23.470126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:23.470156 | orchestrator | Tuesday 11 March 2025 01:02:23 +0000 (0:00:00.756) 0:01:08.137 ********* 2025-03-11 01:02:23.708848 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:23.709860 | orchestrator | 2025-03-11 01:02:23.709997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-11 01:02:23.710334 | orchestrator | Tuesday 11 March 2025 01:02:23 +0000 (0:00:00.236) 0:01:08.374 ********* 2025-03-11 01:02:23.924103 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:23.924584 | orchestrator | 2025-03-11 01:02:23.924812 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-11 01:02:23.925374 | orchestrator | Tuesday 11 March 2025 01:02:23 
+0000 (0:00:00.218) 0:01:08.593 ********* 2025-03-11 01:02:24.087530 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:24.088371 | orchestrator | 2025-03-11 01:02:24.088981 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-11 01:02:24.090834 | orchestrator | Tuesday 11 March 2025 01:02:24 +0000 (0:00:00.161) 0:01:08.754 ********* 2025-03-11 01:02:24.331091 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '02208e85-9f55-5326-ae50-42694fdfd5d1'}}) 2025-03-11 01:02:24.331866 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'}}) 2025-03-11 01:02:24.331970 | orchestrator | 2025-03-11 01:02:24.332433 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-11 01:02:24.332856 | orchestrator | Tuesday 11 March 2025 01:02:24 +0000 (0:00:00.245) 0:01:08.999 ********* 2025-03-11 01:02:26.458315 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'}) 2025-03-11 01:02:26.460156 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'}) 2025-03-11 01:02:26.460679 | orchestrator | 2025-03-11 01:02:26.462154 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-11 01:02:26.462626 | orchestrator | Tuesday 11 March 2025 01:02:26 +0000 (0:00:02.125) 0:01:11.125 ********* 2025-03-11 01:02:26.642669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})  2025-03-11 01:02:26.642808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})  2025-03-11 01:02:26.643928 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:26.644389 | orchestrator | 2025-03-11 01:02:26.645448 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-11 01:02:26.646643 | orchestrator | Tuesday 11 March 2025 01:02:26 +0000 (0:00:00.185) 0:01:11.311 ********* 2025-03-11 01:02:27.978192 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'}) 2025-03-11 01:02:27.978719 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'}) 2025-03-11 01:02:27.979787 | orchestrator | 2025-03-11 01:02:27.980931 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-11 01:02:27.981341 | orchestrator | Tuesday 11 March 2025 01:02:27 +0000 (0:00:01.329) 0:01:12.641 ********* 2025-03-11 01:02:28.163505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})  2025-03-11 01:02:28.164168 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})  2025-03-11 01:02:28.164783 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:28.165405 | orchestrator | 2025-03-11 01:02:28.166761 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-11 01:02:28.167145 | orchestrator | Tuesday 11 March 2025 01:02:28 +0000 (0:00:00.190) 0:01:12.831 ********* 2025-03-11 01:02:28.611285 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:28.611525 | 
orchestrator | 2025-03-11 01:02:28.613316 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-11 01:02:28.614157 | orchestrator | Tuesday 11 March 2025 01:02:28 +0000 (0:00:00.444) 0:01:13.275 ********* 2025-03-11 01:02:28.802427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})  2025-03-11 01:02:28.802704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})  2025-03-11 01:02:28.803499 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:28.803980 | orchestrator | 2025-03-11 01:02:28.805086 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-11 01:02:28.805727 | orchestrator | Tuesday 11 March 2025 01:02:28 +0000 (0:00:00.193) 0:01:13.468 ********* 2025-03-11 01:02:28.949900 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:28.951055 | orchestrator | 2025-03-11 01:02:28.952099 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-11 01:02:28.952983 | orchestrator | Tuesday 11 March 2025 01:02:28 +0000 (0:00:00.148) 0:01:13.617 ********* 2025-03-11 01:02:29.130749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})  2025-03-11 01:02:29.131460 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})  2025-03-11 01:02:29.132087 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:29.132846 | orchestrator | 2025-03-11 01:02:29.133731 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2025-03-11 01:02:29.133936 | orchestrator | Tuesday 11 March 2025 01:02:29 +0000 (0:00:00.181) 0:01:13.798 ********* 2025-03-11 01:02:29.286074 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:29.293272 | orchestrator | 2025-03-11 01:02:29.293674 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-11 01:02:29.294699 | orchestrator | Tuesday 11 March 2025 01:02:29 +0000 (0:00:00.154) 0:01:13.953 ********* 2025-03-11 01:02:29.469445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})  2025-03-11 01:02:29.470771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})  2025-03-11 01:02:29.470812 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:29.471633 | orchestrator | 2025-03-11 01:02:29.472937 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-11 01:02:29.473135 | orchestrator | Tuesday 11 March 2025 01:02:29 +0000 (0:00:00.182) 0:01:14.135 ********* 2025-03-11 01:02:29.631902 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:02:29.632690 | orchestrator | 2025-03-11 01:02:29.632731 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-11 01:02:29.632809 | orchestrator | Tuesday 11 March 2025 01:02:29 +0000 (0:00:00.161) 0:01:14.297 ********* 2025-03-11 01:02:29.813173 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})  2025-03-11 01:02:29.813740 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})  2025-03-11 01:02:29.814069 | 
orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:29.814935 | orchestrator | 2025-03-11 01:02:29.815365 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-11 01:02:29.815873 | orchestrator | Tuesday 11 March 2025 01:02:29 +0000 (0:00:00.183) 0:01:14.481 ********* 2025-03-11 01:02:29.995344 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})  2025-03-11 01:02:29.996110 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})  2025-03-11 01:02:29.997084 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:29.997934 | orchestrator | 2025-03-11 01:02:29.997967 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-11 01:02:29.998710 | orchestrator | Tuesday 11 March 2025 01:02:29 +0000 (0:00:00.181) 0:01:14.662 ********* 2025-03-11 01:02:30.172215 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})  2025-03-11 01:02:30.172376 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})  2025-03-11 01:02:30.172666 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:30.172700 | orchestrator | 2025-03-11 01:02:30.173378 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-11 01:02:30.174277 | orchestrator | Tuesday 11 March 2025 01:02:30 +0000 (0:00:00.176) 0:01:14.839 ********* 2025-03-11 01:02:30.318331 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:30.318476 | orchestrator | 2025-03-11 01:02:30.319250 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-11 01:02:30.319847 | orchestrator | Tuesday 11 March 2025 01:02:30 +0000 (0:00:00.146) 0:01:14.986 ********* 2025-03-11 01:02:30.462478 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:30.464801 | orchestrator | 2025-03-11 01:02:30.464840 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-11 01:02:30.465743 | orchestrator | Tuesday 11 March 2025 01:02:30 +0000 (0:00:00.143) 0:01:15.130 ********* 2025-03-11 01:02:30.853765 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:30.853896 | orchestrator | 2025-03-11 01:02:30.854735 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-11 01:02:30.855041 | orchestrator | Tuesday 11 March 2025 01:02:30 +0000 (0:00:00.390) 0:01:15.521 ********* 2025-03-11 01:02:31.014820 | orchestrator | ok: [testbed-node-5] => { 2025-03-11 01:02:31.015907 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-11 01:02:31.017460 | orchestrator | } 2025-03-11 01:02:31.018680 | orchestrator | 2025-03-11 01:02:31.019234 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-11 01:02:31.020200 | orchestrator | Tuesday 11 March 2025 01:02:31 +0000 (0:00:00.161) 0:01:15.682 ********* 2025-03-11 01:02:31.166396 | orchestrator | ok: [testbed-node-5] => { 2025-03-11 01:02:31.167926 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-11 01:02:31.169390 | orchestrator | } 2025-03-11 01:02:31.170684 | orchestrator | 2025-03-11 01:02:31.171601 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-11 01:02:31.172310 | orchestrator | Tuesday 11 March 2025 01:02:31 +0000 (0:00:00.151) 0:01:15.834 ********* 2025-03-11 01:02:31.330988 | orchestrator | ok: [testbed-node-5] => { 2025-03-11 01:02:31.331655 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2025-03-11 01:02:31.331690 | orchestrator | } 2025-03-11 01:02:31.331819 | orchestrator | 2025-03-11 01:02:31.332330 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-11 01:02:31.333249 | orchestrator | Tuesday 11 March 2025 01:02:31 +0000 (0:00:00.163) 0:01:15.998 ********* 2025-03-11 01:02:31.854504 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:02:31.858444 | orchestrator | 2025-03-11 01:02:31.858714 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-11 01:02:31.859709 | orchestrator | Tuesday 11 March 2025 01:02:31 +0000 (0:00:00.522) 0:01:16.521 ********* 2025-03-11 01:02:32.378261 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:02:32.378857 | orchestrator | 2025-03-11 01:02:32.379728 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-11 01:02:32.380306 | orchestrator | Tuesday 11 March 2025 01:02:32 +0000 (0:00:00.524) 0:01:17.046 ********* 2025-03-11 01:02:32.907921 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:02:32.908751 | orchestrator | 2025-03-11 01:02:32.909227 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-11 01:02:33.079983 | orchestrator | Tuesday 11 March 2025 01:02:32 +0000 (0:00:00.527) 0:01:17.574 ********* 2025-03-11 01:02:33.080081 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:02:33.080805 | orchestrator | 2025-03-11 01:02:33.082265 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-11 01:02:33.082803 | orchestrator | Tuesday 11 March 2025 01:02:33 +0000 (0:00:00.173) 0:01:17.747 ********* 2025-03-11 01:02:33.211753 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:33.212255 | orchestrator | 2025-03-11 01:02:33.213029 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2025-03-11 01:02:33.213805 | orchestrator | Tuesday 11 March 2025 01:02:33 +0000 (0:00:00.131) 0:01:17.879 ********* 2025-03-11 01:02:33.320217 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:33.322702 | orchestrator | 2025-03-11 01:02:33.327102 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-11 01:02:33.329952 | orchestrator | Tuesday 11 March 2025 01:02:33 +0000 (0:00:00.109) 0:01:17.988 ********* 2025-03-11 01:02:33.492210 | orchestrator | ok: [testbed-node-5] => { 2025-03-11 01:02:33.492385 | orchestrator |  "vgs_report": { 2025-03-11 01:02:33.493142 | orchestrator |  "vg": [] 2025-03-11 01:02:33.493200 | orchestrator |  } 2025-03-11 01:02:33.494240 | orchestrator | } 2025-03-11 01:02:33.495278 | orchestrator | 2025-03-11 01:02:33.496163 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-11 01:02:33.496904 | orchestrator | Tuesday 11 March 2025 01:02:33 +0000 (0:00:00.169) 0:01:18.158 ********* 2025-03-11 01:02:33.872425 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:34.025220 | orchestrator | 2025-03-11 01:02:34.025339 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-11 01:02:34.025358 | orchestrator | Tuesday 11 March 2025 01:02:33 +0000 (0:00:00.380) 0:01:18.538 ********* 2025-03-11 01:02:34.025390 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:34.169494 | orchestrator | 2025-03-11 01:02:34.169648 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-11 01:02:34.169668 | orchestrator | Tuesday 11 March 2025 01:02:34 +0000 (0:00:00.154) 0:01:18.692 ********* 2025-03-11 01:02:34.169699 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:34.170841 | orchestrator | 2025-03-11 01:02:34.171604 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2025-03-11 01:02:34.172462 | orchestrator | Tuesday 11 March 2025 01:02:34 +0000 (0:00:00.145) 0:01:18.838 ********* 2025-03-11 01:02:34.364220 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:34.364746 | orchestrator | 2025-03-11 01:02:34.366107 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-11 01:02:34.366835 | orchestrator | Tuesday 11 March 2025 01:02:34 +0000 (0:00:00.194) 0:01:19.032 ********* 2025-03-11 01:02:34.518793 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:34.519413 | orchestrator | 2025-03-11 01:02:34.521146 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-11 01:02:34.522438 | orchestrator | Tuesday 11 March 2025 01:02:34 +0000 (0:00:00.154) 0:01:19.186 ********* 2025-03-11 01:02:34.679223 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:34.681544 | orchestrator | 2025-03-11 01:02:34.681772 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-11 01:02:34.683699 | orchestrator | Tuesday 11 March 2025 01:02:34 +0000 (0:00:00.158) 0:01:19.345 ********* 2025-03-11 01:02:34.833501 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:34.835239 | orchestrator | 2025-03-11 01:02:34.836433 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-11 01:02:34.838260 | orchestrator | Tuesday 11 March 2025 01:02:34 +0000 (0:00:00.156) 0:01:19.501 ********* 2025-03-11 01:02:34.992997 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:34.993793 | orchestrator | 2025-03-11 01:02:34.995848 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-11 01:02:34.997930 | orchestrator | Tuesday 11 March 2025 01:02:34 +0000 (0:00:00.159) 0:01:19.660 ********* 2025-03-11 01:02:35.146478 | orchestrator | skipping: 
[testbed-node-5] 2025-03-11 01:02:35.147354 | orchestrator | 2025-03-11 01:02:35.148604 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-11 01:02:35.149071 | orchestrator | Tuesday 11 March 2025 01:02:35 +0000 (0:00:00.153) 0:01:19.814 ********* 2025-03-11 01:02:35.305330 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:35.305728 | orchestrator | 2025-03-11 01:02:35.306226 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-11 01:02:35.308792 | orchestrator | Tuesday 11 March 2025 01:02:35 +0000 (0:00:00.156) 0:01:19.971 ********* 2025-03-11 01:02:35.450709 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:35.451213 | orchestrator | 2025-03-11 01:02:35.451912 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-11 01:02:35.452647 | orchestrator | Tuesday 11 March 2025 01:02:35 +0000 (0:00:00.147) 0:01:20.119 ********* 2025-03-11 01:02:35.597644 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:35.599208 | orchestrator | 2025-03-11 01:02:35.600646 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-11 01:02:35.603443 | orchestrator | Tuesday 11 March 2025 01:02:35 +0000 (0:00:00.146) 0:01:20.265 ********* 2025-03-11 01:02:35.976823 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:35.978830 | orchestrator | 2025-03-11 01:02:35.980002 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-11 01:02:36.143169 | orchestrator | Tuesday 11 March 2025 01:02:35 +0000 (0:00:00.377) 0:01:20.643 ********* 2025-03-11 01:02:36.143270 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:02:36.144498 | orchestrator | 2025-03-11 01:02:36.144807 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-11 01:02:36.145448 | 
orchestrator | Tuesday 11 March 2025 01:02:36 +0000 (0:00:00.168) 0:01:20.811 *********
2025-03-11 01:02:36.320475 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:36.321399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:36.321786 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:36.323510 | orchestrator |
2025-03-11 01:02:36.324955 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-03-11 01:02:36.326979 | orchestrator | Tuesday 11 March 2025 01:02:36 +0000 (0:00:00.176) 0:01:20.988 *********
2025-03-11 01:02:36.513045 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:36.513696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:36.514740 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:36.515601 | orchestrator |
2025-03-11 01:02:36.516044 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-03-11 01:02:36.516544 | orchestrator | Tuesday 11 March 2025 01:02:36 +0000 (0:00:00.193) 0:01:21.181 *********
2025-03-11 01:02:36.703418 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:36.704797 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:36.706134 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:36.707341 | orchestrator |
2025-03-11 01:02:36.708032 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-03-11 01:02:36.709079 | orchestrator | Tuesday 11 March 2025 01:02:36 +0000 (0:00:00.189) 0:01:21.371 *********
2025-03-11 01:02:36.880997 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:36.881959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:36.883040 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:36.883915 | orchestrator |
2025-03-11 01:02:36.884853 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-03-11 01:02:36.885827 | orchestrator | Tuesday 11 March 2025 01:02:36 +0000 (0:00:00.176) 0:01:21.547 *********
2025-03-11 01:02:37.069678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:37.070384 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:37.071221 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:37.072246 | orchestrator |
2025-03-11 01:02:37.072884 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-03-11 01:02:37.073360 | orchestrator | Tuesday 11 March 2025 01:02:37 +0000 (0:00:00.191) 0:01:21.738 *********
2025-03-11 01:02:37.237075 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:37.238365 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:37.239938 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:37.240682 | orchestrator |
2025-03-11 01:02:37.241589 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-03-11 01:02:37.242640 | orchestrator | Tuesday 11 March 2025 01:02:37 +0000 (0:00:00.164) 0:01:21.903 *********
2025-03-11 01:02:37.426357 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:37.426573 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:37.427329 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:37.428118 | orchestrator |
2025-03-11 01:02:37.428454 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-03-11 01:02:37.428932 | orchestrator | Tuesday 11 March 2025 01:02:37 +0000 (0:00:00.191) 0:01:22.094 *********
2025-03-11 01:02:37.601663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:37.601826 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:37.602361 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:37.602609 | orchestrator |
2025-03-11 01:02:37.602996 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-03-11 01:02:37.603499 | orchestrator | Tuesday 11 March 2025 01:02:37 +0000 (0:00:00.175) 0:01:22.270 *********
2025-03-11 01:02:38.143689 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:02:38.144814 | orchestrator |
2025-03-11 01:02:38.145754 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-03-11 01:02:38.146650 | orchestrator | Tuesday 11 March 2025 01:02:38 +0000 (0:00:00.538) 0:01:22.809 *********
2025-03-11 01:02:38.892440 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:02:38.893155 | orchestrator |
2025-03-11 01:02:38.893455 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-03-11 01:02:38.894138 | orchestrator | Tuesday 11 March 2025 01:02:38 +0000 (0:00:00.751) 0:01:23.560 *********
2025-03-11 01:02:39.067944 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:02:39.068813 | orchestrator |
2025-03-11 01:02:39.068966 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-03-11 01:02:39.069040 | orchestrator | Tuesday 11 March 2025 01:02:39 +0000 (0:00:00.173) 0:01:23.734 *********
2025-03-11 01:02:39.260636 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'vg_name': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:39.261491 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'vg_name': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:39.261806 | orchestrator |
2025-03-11 01:02:39.262493 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-03-11 01:02:39.263200 | orchestrator | Tuesday 11 March 2025 01:02:39 +0000 (0:00:00.194) 0:01:23.929 *********
2025-03-11 01:02:39.484072 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:39.485765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:39.488768 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:39.488825 | orchestrator |
2025-03-11 01:02:39.488841 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-03-11 01:02:39.488859 | orchestrator | Tuesday 11 March 2025 01:02:39 +0000 (0:00:00.218) 0:01:24.147 *********
2025-03-11 01:02:39.683023 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:39.683968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:39.684724 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:39.685101 | orchestrator |
2025-03-11 01:02:39.686462 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-03-11 01:02:39.687694 | orchestrator | Tuesday 11 March 2025 01:02:39 +0000 (0:00:00.201) 0:01:24.349 *********
2025-03-11 01:02:39.895237 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1', 'data_vg': 'ceph-02208e85-9f55-5326-ae50-42694fdfd5d1'})
2025-03-11 01:02:39.896081 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1', 'data_vg': 'ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1'})
2025-03-11 01:02:39.896113 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:39.897362 | orchestrator |
2025-03-11 01:02:39.897780 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-03-11 01:02:39.899076 | orchestrator | Tuesday 11 March 2025 01:02:39 +0000 (0:00:00.209) 0:01:24.558 *********
2025-03-11 01:02:40.381702 | orchestrator | ok: [testbed-node-5] => {
2025-03-11 01:02:40.382323 | orchestrator |     "lvm_report": {
2025-03-11 01:02:40.382872 | orchestrator |         "lv": [
2025-03-11 01:02:40.382904 | orchestrator |             {
2025-03-11 01:02:40.383627 | orchestrator |                 "lv_name": "osd-block-02208e85-9f55-5326-ae50-42694fdfd5d1",
2025-03-11 01:02:40.384286 | orchestrator |                 "vg_name": "ceph-02208e85-9f55-5326-ae50-42694fdfd5d1"
2025-03-11 01:02:40.385728 | orchestrator |             },
2025-03-11 01:02:40.385926 | orchestrator |             {
2025-03-11 01:02:40.385950 | orchestrator |                 "lv_name": "osd-block-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1",
2025-03-11 01:02:40.385969 | orchestrator |                 "vg_name": "ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1"
2025-03-11 01:02:40.386497 | orchestrator |             }
2025-03-11 01:02:40.386884 | orchestrator |         ],
2025-03-11 01:02:40.387532 | orchestrator |         "pv": [
2025-03-11 01:02:40.387998 | orchestrator |             {
2025-03-11 01:02:40.388869 | orchestrator |                 "pv_name": "/dev/sdb",
2025-03-11 01:02:40.389484 | orchestrator |                 "vg_name": "ceph-02208e85-9f55-5326-ae50-42694fdfd5d1"
2025-03-11 01:02:40.389925 | orchestrator |             },
2025-03-11 01:02:40.390873 | orchestrator |             {
2025-03-11 01:02:40.391170 | orchestrator |                 "pv_name": "/dev/sdc",
2025-03-11 01:02:40.392236 | orchestrator |                 "vg_name": "ceph-3d7e3469-34ac-59ca-aeff-ca5cbcd2cfb1"
2025-03-11 01:02:40.392338 | orchestrator |             }
2025-03-11 01:02:40.393240 | orchestrator |         ]
2025-03-11 01:02:40.393584 | orchestrator |     }
2025-03-11 01:02:40.393975 | orchestrator | }
2025-03-11 01:02:40.394631 | orchestrator |
2025-03-11 01:02:40.396268 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:02:40.396748 | orchestrator | 2025-03-11 01:02:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 01:02:40.396839 | orchestrator | 2025-03-11 01:02:40 | INFO  | Please wait and do not abort execution.
2025-03-11 01:02:40.397973 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-03-11 01:02:40.398209 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-03-11 01:02:40.399207 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-03-11 01:02:40.399386 | orchestrator |
2025-03-11 01:02:40.400108 | orchestrator |
2025-03-11 01:02:40.401281 | orchestrator |
2025-03-11 01:02:40.402267 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:02:40.402693 | orchestrator | Tuesday 11 March 2025 01:02:40 +0000 (0:00:00.490) 0:01:25.049 *********
2025-03-11 01:02:40.403369 | orchestrator | ===============================================================================
2025-03-11 01:02:40.403848 | orchestrator | Create block VGs -------------------------------------------------------- 6.73s
2025-03-11 01:02:40.404932 | orchestrator | Create block LVs -------------------------------------------------------- 4.23s
2025-03-11 01:02:40.406090 | orchestrator | Print LVM report data --------------------------------------------------- 2.28s
2025-03-11 01:02:40.406828 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.08s
2025-03-11 01:02:40.407637 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.92s
2025-03-11 01:02:40.407859 | orchestrator | Add known links to the list of available block devices ------------------ 1.88s
2025-03-11 01:02:40.408661 | orchestrator | Add known partitions to the list of available block devices ------------- 1.72s
2025-03-11 01:02:40.409150 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.66s
2025-03-11 01:02:40.409593 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.60s
2025-03-11 01:02:40.410790 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.60s
2025-03-11 01:02:40.410959 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.26s
2025-03-11 01:02:40.413339 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2025-03-11 01:02:40.414117 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.99s
2025-03-11 01:02:40.414773 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s
2025-03-11 01:02:40.415306 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s
2025-03-11 01:02:40.415730 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.80s
2025-03-11 01:02:40.416362 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.79s
2025-03-11 01:02:40.416757 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.79s
2025-03-11 01:02:40.417028 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s
2025-03-11 01:02:40.417611 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.78s
2025-03-11 01:02:42.900402 | orchestrator | 2025-03-11 01:02:42 | INFO  | Task bd37018e-fccd-49a4-8d79-f3d394bf46c5 (facts) was prepared for execution.
2025-03-11 01:02:46.673504 | orchestrator | 2025-03-11 01:02:42 | INFO  | It takes a moment until task bd37018e-fccd-49a4-8d79-f3d394bf46c5 (facts) has been started and output is visible here.
2025-03-11 01:02:46.673681 | orchestrator |
2025-03-11 01:02:46.674219 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-03-11 01:02:46.675379 | orchestrator |
2025-03-11 01:02:46.680887 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-03-11 01:02:48.238568 | orchestrator | Tuesday 11 March 2025 01:02:46 +0000 (0:00:00.237) 0:00:00.238 *********
2025-03-11 01:02:48.238750 | orchestrator | ok: [testbed-manager]
2025-03-11 01:02:48.241717 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:02:48.242443 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:02:48.243694 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:02:48.244893 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:02:48.246086 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:02:48.246258 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:02:48.247029 | orchestrator |
2025-03-11 01:02:48.247464 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-03-11 01:02:48.250485 | orchestrator | Tuesday 11 March 2025 01:02:48 +0000 (0:00:01.561) 0:00:01.799 *********
2025-03-11 01:02:48.453205 | orchestrator | skipping: [testbed-manager]
2025-03-11 01:02:48.540549 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:02:48.621349 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:02:48.707858 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:02:48.796766 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:02:49.849300 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:02:49.849477 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:49.850900 | orchestrator |
2025-03-11 01:02:49.851741 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-03-11 01:02:49.852515 | orchestrator |
2025-03-11 01:02:49.853431 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-03-11 01:02:49.854120 | orchestrator | Tuesday 11 March 2025 01:02:49 +0000 (0:00:01.616) 0:00:03.416 *********
2025-03-11 01:02:54.698114 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:02:54.698299 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:02:54.700089 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:02:54.700906 | orchestrator | ok: [testbed-manager]
2025-03-11 01:02:54.700939 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:02:54.701802 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:02:54.702889 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:02:54.703733 | orchestrator |
2025-03-11 01:02:54.704371 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-03-11 01:02:54.704853 | orchestrator |
2025-03-11 01:02:54.705770 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-03-11 01:02:54.706522 | orchestrator | Tuesday 11 March 2025 01:02:54 +0000 (0:00:04.848) 0:00:08.264 *********
2025-03-11 01:02:54.879908 | orchestrator | skipping: [testbed-manager]
2025-03-11 01:02:54.971886 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:02:55.061976 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:02:55.153817 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:02:55.241914 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:02:55.286383 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:02:55.286851 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:02:55.287640 | orchestrator |
2025-03-11 01:02:55.288722 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:02:55.289145 | orchestrator | 2025-03-11 01:02:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-03-11 01:02:55.289440 | orchestrator | 2025-03-11 01:02:55 | INFO  | Please wait and do not abort execution.
2025-03-11 01:02:55.289473 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:02:55.289907 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:02:55.290822 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:02:55.291189 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:02:55.291580 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:02:55.291740 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:02:55.292143 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:02:55.292881 | orchestrator |
2025-03-11 01:02:55.293225 | orchestrator |
2025-03-11 01:02:55.293787 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:02:55.293880 | orchestrator | Tuesday 11 March 2025 01:02:55 +0000 (0:00:00.590) 0:00:08.854 *********
2025-03-11 01:02:55.294281 | orchestrator | ===============================================================================
2025-03-11 01:02:55.296073 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s
2025-03-11 01:02:55.296300 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.62s
2025-03-11 01:02:55.296529 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.56s
2025-03-11 01:02:55.296932 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s
2025-03-11 01:02:55.993460 | orchestrator |
2025-03-11 01:02:55.996812 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Mar 11 01:02:55 UTC 2025
2025-03-11 01:02:57.600362 | orchestrator |
2025-03-11 01:02:57.600485 | orchestrator | 2025-03-11 01:02:57 | INFO  | Collection nutshell is prepared for execution
2025-03-11 01:02:57.604977 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [0] - dotfiles
2025-03-11 01:02:57.605016 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [0] - homer
2025-03-11 01:02:57.606360 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [0] - netdata
2025-03-11 01:02:57.606387 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [0] - openstackclient
2025-03-11 01:02:57.606402 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [0] - phpmyadmin
2025-03-11 01:02:57.606416 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [0] - common
2025-03-11 01:02:57.606493 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [1] -- loadbalancer
2025-03-11 01:02:57.606573 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [2] --- opensearch
2025-03-11 01:02:57.606623 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [2] --- mariadb-ng
2025-03-11 01:02:57.606639 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [3] ---- horizon
2025-03-11 01:02:57.606653 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [3] ---- keystone
2025-03-11 01:02:57.606671 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [4] ----- neutron
2025-03-11 01:02:57.607578 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [5] ------ wait-for-nova
2025-03-11 01:02:57.607634 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [5] ------ octavia
2025-03-11 01:02:57.607773 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [4] ----- barbican
2025-03-11 01:02:57.607793 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [4] ----- designate
2025-03-11 01:02:57.607808 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [4] ----- ironic
2025-03-11 01:02:57.607822 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [4] ----- placement
2025-03-11 01:02:57.607840 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [4] ----- magnum
2025-03-11 01:02:57.607890 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [1] -- openvswitch
2025-03-11 01:02:57.607907 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [2] --- ovn
2025-03-11 01:02:57.607922 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [1] -- memcached
2025-03-11 01:02:57.607936 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [1] -- redis
2025-03-11 01:02:57.607978 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [1] -- rabbitmq-ng
2025-03-11 01:02:57.607993 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [0] - kubernetes
2025-03-11 01:02:57.608011 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [1] -- kubeconfig
2025-03-11 01:02:57.608107 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [1] -- copy-kubeconfig
2025-03-11 01:02:57.609613 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [0] - ceph
2025-03-11 01:02:57.609645 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [1] -- ceph-pools
2025-03-11 01:02:57.609733 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [2] --- copy-ceph-keys
2025-03-11 01:02:57.609753 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [3] ---- cephclient
2025-03-11 01:02:57.609771 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-03-11 01:02:57.609868 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [4] ----- wait-for-keystone
2025-03-11 01:02:57.609888 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [5] ------ kolla-ceph-rgw
2025-03-11 01:02:57.609902 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [5] ------ glance
2025-03-11 01:02:57.609920 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [5] ------ cinder
2025-03-11 01:02:57.610149 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [5] ------ nova
2025-03-11 01:02:57.610180 | orchestrator | 2025-03-11 01:02:57 | INFO  | A [4] ----- prometheus
2025-03-11 01:02:57.751416 | orchestrator | 2025-03-11 01:02:57 | INFO  | D [5] ------ grafana
2025-03-11 01:02:57.751504 | orchestrator | 2025-03-11 01:02:57 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-03-11 01:02:59.979440 | orchestrator | 2025-03-11 01:02:57 | INFO  | Tasks are running in the background
2025-03-11 01:02:59.979564 | orchestrator | 2025-03-11 01:02:59 | INFO  | No task IDs specified, wait for all currently running tasks
2025-03-11 01:03:02.093140 | orchestrator | 2025-03-11 01:03:02 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:02.094851 | orchestrator | 2025-03-11 01:03:02 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED
2025-03-11 01:03:02.096694 | orchestrator | 2025-03-11 01:03:02 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:02.097877 | orchestrator | 2025-03-11 01:03:02 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:02.098398 | orchestrator | 2025-03-11 01:03:02 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:02.102773 | orchestrator | 2025-03-11 01:03:02 | INFO  | Task 4a681cd8-e4aa-4d45-bf6a-67a37266a2e8 is in state STARTED
2025-03-11 01:03:05.152247 | orchestrator | 2025-03-11 01:03:02 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:05.152380 | orchestrator | 2025-03-11 01:03:05 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:05.156956 | orchestrator | 2025-03-11 01:03:05 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED
2025-03-11 01:03:05.157453 | orchestrator | 2025-03-11 01:03:05 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:05.158394 | orchestrator | 2025-03-11 01:03:05 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:05.162177 | orchestrator | 2025-03-11 01:03:05 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:05.165946 | orchestrator | 2025-03-11 01:03:05 | INFO  | Task 4a681cd8-e4aa-4d45-bf6a-67a37266a2e8 is in state STARTED
2025-03-11 01:03:08.230568 | orchestrator | 2025-03-11 01:03:05 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:08.230780 | orchestrator | 2025-03-11 01:03:08 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:08.232043 | orchestrator | 2025-03-11 01:03:08 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED
2025-03-11 01:03:08.232096 | orchestrator | 2025-03-11 01:03:08 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:08.232486 | orchestrator | 2025-03-11 01:03:08 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:08.240152 | orchestrator | 2025-03-11 01:03:08 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:11.294112 | orchestrator | 2025-03-11 01:03:08 | INFO  | Task 4a681cd8-e4aa-4d45-bf6a-67a37266a2e8 is in state STARTED
2025-03-11 01:03:11.294254 | orchestrator | 2025-03-11 01:03:08 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:11.294293 | orchestrator | 2025-03-11 01:03:11 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:11.298696 | orchestrator | 2025-03-11 01:03:11 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED
2025-03-11 01:03:11.299917 | orchestrator | 2025-03-11 01:03:11 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:11.303863 | orchestrator | 2025-03-11 01:03:11 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:11.304861 | orchestrator | 2025-03-11 01:03:11 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:11.306779 | orchestrator | 2025-03-11 01:03:11 | INFO  | Task 4a681cd8-e4aa-4d45-bf6a-67a37266a2e8 is in state STARTED
2025-03-11 01:03:11.306926 | orchestrator | 2025-03-11 01:03:11 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:14.387114 | orchestrator | 2025-03-11 01:03:14 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:14.388039 | orchestrator | 2025-03-11 01:03:14 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED
2025-03-11 01:03:14.389994 | orchestrator | 2025-03-11 01:03:14 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:14.393118 | orchestrator | 2025-03-11 01:03:14 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:14.397439 | orchestrator | 2025-03-11 01:03:14 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:14.399169 | orchestrator | 2025-03-11 01:03:14 | INFO  | Task 4a681cd8-e4aa-4d45-bf6a-67a37266a2e8 is in state STARTED
2025-03-11 01:03:14.401894 | orchestrator | 2025-03-11 01:03:14 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:17.514529 | orchestrator | 2025-03-11 01:03:17 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:17.519340 | orchestrator | 2025-03-11 01:03:17 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED
2025-03-11 01:03:17.519381 | orchestrator | 2025-03-11 01:03:17 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:17.519399 | orchestrator | 2025-03-11 01:03:17 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:17.519414 | orchestrator | 2025-03-11 01:03:17 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:17.519438 | orchestrator | 2025-03-11 01:03:17 | INFO  | Task 4a681cd8-e4aa-4d45-bf6a-67a37266a2e8 is in state STARTED
2025-03-11 01:03:20.603438 | orchestrator | 2025-03-11 01:03:17 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:20.603566 | orchestrator | 2025-03-11 01:03:20 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:20.611091 | orchestrator | 2025-03-11 01:03:20 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED
2025-03-11 01:03:20.611132 | orchestrator | 2025-03-11 01:03:20 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:23.698843 | orchestrator | 2025-03-11 01:03:20 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:23.698966 | orchestrator | 2025-03-11 01:03:20 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:23.698984 | orchestrator | 2025-03-11 01:03:20 | INFO  | Task 4a681cd8-e4aa-4d45-bf6a-67a37266a2e8 is in state STARTED
2025-03-11 01:03:23.699013 | orchestrator | 2025-03-11 01:03:20 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:23.699047 | orchestrator | 2025-03-11 01:03:23 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:23.711750 | orchestrator | 2025-03-11 01:03:23 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED
2025-03-11 01:03:23.711825 | orchestrator | 2025-03-11 01:03:23 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:23.714922 | orchestrator | 2025-03-11 01:03:23 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:23.725919 | orchestrator | 2025-03-11 01:03:23 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:23.732357 | orchestrator | 2025-03-11 01:03:23 | INFO  | Task 4a681cd8-e4aa-4d45-bf6a-67a37266a2e8 is in state STARTED
2025-03-11 01:03:26.807571 | orchestrator | 2025-03-11 01:03:23 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:26.807752 | orchestrator | 2025-03-11 01:03:26 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:26.815609 | orchestrator | 2025-03-11 01:03:26 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED
2025-03-11 01:03:26.823860 | orchestrator | 2025-03-11 01:03:26 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:26.828300 | orchestrator | 2025-03-11 01:03:26 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:26.828338 | orchestrator | 2025-03-11 01:03:26 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:29.905813 | orchestrator | 2025-03-11 01:03:26 | INFO  | Task 4a681cd8-e4aa-4d45-bf6a-67a37266a2e8 is in state STARTED
2025-03-11 01:03:29.905928 | orchestrator | 2025-03-11 01:03:26 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:29.905963 | orchestrator | 2025-03-11 01:03:29 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:29.908482 | orchestrator | 2025-03-11 01:03:29 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED
2025-03-11 01:03:29.912578 | orchestrator | 2025-03-11 01:03:29 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:29.914286 | orchestrator | 2025-03-11 01:03:29 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:29.921062 | orchestrator | 2025-03-11 01:03:29 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:29.924679 | orchestrator | 2025-03-11 01:03:29 | INFO  | Task 4a681cd8-e4aa-4d45-bf6a-67a37266a2e8 is in state SUCCESS
2025-03-11 01:03:29.924768 | orchestrator | 2025-03-11 01:03:29 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:29.924802 | orchestrator |
2025-03-11 01:03:29.924819 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-03-11 01:03:29.924834 | orchestrator |
2025-03-11 01:03:29.924849 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-03-11 01:03:29.924871 | orchestrator | Tuesday 11 March 2025 01:03:08 +0000 (0:00:00.384) 0:00:00.384 *********
2025-03-11 01:03:29.924886 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:03:29.924901 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:03:29.924915 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:03:29.924930 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:03:29.924944 | orchestrator | changed: [testbed-manager]
2025-03-11 01:03:29.924958 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:03:29.924971 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:03:29.924986 | orchestrator |
2025-03-11 01:03:29.925000 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-03-11 01:03:29.925014 | orchestrator | Tuesday 11 March 2025 01:03:11 +0000 (0:00:03.713) 0:00:04.098 *********
2025-03-11 01:03:29.925029 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-03-11 01:03:29.925048 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-03-11 01:03:29.925063 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-03-11 01:03:29.925077 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-03-11 01:03:29.925091 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-03-11 01:03:29.925104 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-03-11 01:03:29.925118 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-03-11 01:03:29.925132 | orchestrator |
2025-03-11 01:03:29.925146 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-03-11 01:03:29.925161 | orchestrator | Tuesday 11 March 2025 01:03:15 +0000 (0:00:03.844) 0:00:07.943 *********
2025-03-11 01:03:29.925177 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:03:12.717982', 'end': '2025-03-11 01:03:12.727408', 'delta': '0:00:00.009426', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:03:29.925224 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:03:12.716332', 'end': '2025-03-11 01:03:12.724788', 'delta': '0:00:00.008456', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:03:29.925242 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:03:13.292756', 'end': '2025-03-11 01:03:13.301101', 'delta': '0:00:00.008345', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:03:29.925286 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:03:13.640796', 'end': '2025-03-11 01:03:13.649868', 'delta': '0:00:00.009072', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-03-11 01:03:29.925304 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:03:13.987566', 'end': '2025-03-11 01:03:13.998376', 'delta': '0:00:00.010810', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args':
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-11 01:03:29.925320 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:03:14.348712', 'end': '2025-03-11 01:03:14.357717', 'delta': '0:00:00.009005', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-11 01:03:29.925349 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-11 01:03:14.968260', 'end': '2025-03-11 01:03:14.977381', 'delta': '0:00:00.009121', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-11 01:03:29.925365 | orchestrator | 2025-03-11 01:03:29.925381 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-03-11 01:03:29.925397 | orchestrator | Tuesday 11 March 2025 01:03:20 +0000 (0:00:05.068) 0:00:13.011 ********* 2025-03-11 01:03:29.925413 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-03-11 01:03:29.925429 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-03-11 01:03:29.925444 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-03-11 01:03:29.925460 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-03-11 01:03:29.925476 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-03-11 01:03:29.925491 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-03-11 01:03:29.925507 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-03-11 01:03:29.925523 | orchestrator | 2025-03-11 01:03:29.925539 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-03-11 01:03:29.925555 | orchestrator | Tuesday 11 March 2025 01:03:24 +0000 (0:00:03.327) 0:00:16.339 ********* 2025-03-11 01:03:29.925571 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-03-11 01:03:29.925585 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-03-11 01:03:29.925599 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-03-11 01:03:29.925613 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-03-11 01:03:29.925626 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-03-11 01:03:29.925662 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-03-11 01:03:29.925677 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-03-11 01:03:29.925691 | orchestrator | 2025-03-11 01:03:29.925705 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:03:29.925726 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:03:33.004072 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:03:33.004186 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:03:33.004205 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:03:33.004255 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:03:33.004272 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:03:33.004287 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:03:33.004302 | orchestrator | 2025-03-11 01:03:33.004317 | orchestrator | 2025-03-11 01:03:33.004332 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-03-11 01:03:33.004347 | orchestrator | Tuesday 11 March 2025 01:03:28 +0000 (0:00:04.822) 0:00:21.162 ********* 2025-03-11 01:03:33.004361 | orchestrator | =============================================================================== 2025-03-11 01:03:33.004375 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 5.07s 2025-03-11 01:03:33.004390 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.82s 2025-03-11 01:03:33.004404 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.84s 2025-03-11 01:03:33.004418 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.71s 2025-03-11 01:03:33.004433 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.33s 2025-03-11 01:03:33.004463 | orchestrator | 2025-03-11 01:03:32 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED 2025-03-11 01:03:33.012401 | orchestrator | 2025-03-11 01:03:33 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED 2025-03-11 01:03:33.016795 | orchestrator | 2025-03-11 01:03:33 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED 2025-03-11 01:03:33.027062 | orchestrator | 2025-03-11 01:03:33 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED 2025-03-11 01:03:33.033145 | orchestrator | 2025-03-11 01:03:33 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED 2025-03-11 01:03:33.039001 | orchestrator | 2025-03-11 01:03:33 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:03:36.120028 | orchestrator | 2025-03-11 01:03:33 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:03:36.120164 | orchestrator | 2025-03-11 01:03:36 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is 
in state STARTED 2025-03-11 01:03:36.122292 | orchestrator | 2025-03-11 01:03:36 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED 2025-03-11 01:03:36.124909 | orchestrator | 2025-03-11 01:03:36 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED 2025-03-11 01:03:36.129139 | orchestrator | 2025-03-11 01:03:36 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED 2025-03-11 01:03:36.130549 | orchestrator | 2025-03-11 01:03:36 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED 2025-03-11 01:03:36.132278 | orchestrator | 2025-03-11 01:03:36 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:03:36.132824 | orchestrator | 2025-03-11 01:03:36 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:03:39.203526 | orchestrator | 2025-03-11 01:03:39 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED 2025-03-11 01:03:39.207151 | orchestrator | 2025-03-11 01:03:39 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED 2025-03-11 01:03:39.207550 | orchestrator | 2025-03-11 01:03:39 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED 2025-03-11 01:03:39.211208 | orchestrator | 2025-03-11 01:03:39 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED 2025-03-11 01:03:39.213351 | orchestrator | 2025-03-11 01:03:39 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED 2025-03-11 01:03:39.215513 | orchestrator | 2025-03-11 01:03:39 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:03:39.215783 | orchestrator | 2025-03-11 01:03:39 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:03:42.327848 | orchestrator | 2025-03-11 01:03:42 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED 2025-03-11 01:03:42.329365 | orchestrator | 2025-03-11 01:03:42 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in 
state STARTED 2025-03-11 01:03:42.331705 | orchestrator | 2025-03-11 01:03:42 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED 2025-03-11 01:03:42.334681 | orchestrator | 2025-03-11 01:03:42 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED 2025-03-11 01:03:42.335325 | orchestrator | 2025-03-11 01:03:42 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED 2025-03-11 01:03:42.336601 | orchestrator | 2025-03-11 01:03:42 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:03:45.420360 | orchestrator | 2025-03-11 01:03:42 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:03:45.421217 | orchestrator | 2025-03-11 01:03:45 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED 2025-03-11 01:03:45.426287 | orchestrator | 2025-03-11 01:03:45 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED 2025-03-11 01:03:45.427972 | orchestrator | 2025-03-11 01:03:45 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED 2025-03-11 01:03:45.430401 | orchestrator | 2025-03-11 01:03:45 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED 2025-03-11 01:03:45.437162 | orchestrator | 2025-03-11 01:03:45 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED 2025-03-11 01:03:45.440370 | orchestrator | 2025-03-11 01:03:45 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:03:45.441057 | orchestrator | 2025-03-11 01:03:45 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:03:48.513926 | orchestrator | 2025-03-11 01:03:48 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED 2025-03-11 01:03:51.642126 | orchestrator | 2025-03-11 01:03:48 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED 2025-03-11 01:03:51.642243 | orchestrator | 2025-03-11 01:03:48 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state 
STARTED 2025-03-11 01:03:51.642263 | orchestrator | 2025-03-11 01:03:48 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED 2025-03-11 01:03:51.642278 | orchestrator | 2025-03-11 01:03:48 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED 2025-03-11 01:03:51.642293 | orchestrator | 2025-03-11 01:03:48 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:03:51.642308 | orchestrator | 2025-03-11 01:03:48 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:03:51.642340 | orchestrator | 2025-03-11 01:03:51 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED 2025-03-11 01:03:51.645954 | orchestrator | 2025-03-11 01:03:51 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state STARTED 2025-03-11 01:03:51.645987 | orchestrator | 2025-03-11 01:03:51 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED 2025-03-11 01:03:51.646002 | orchestrator | 2025-03-11 01:03:51 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED 2025-03-11 01:03:51.646089 | orchestrator | 2025-03-11 01:03:51 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED 2025-03-11 01:03:51.647161 | orchestrator | 2025-03-11 01:03:51 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:03:51.647198 | orchestrator | 2025-03-11 01:03:51 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:03:54.720265 | orchestrator | 2025-03-11 01:03:54 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED 2025-03-11 01:03:54.723007 | orchestrator | 2025-03-11 01:03:54 | INFO  | Task d700c631-8249-4a5e-9318-14156a0ee2f1 is in state SUCCESS 2025-03-11 01:03:54.723058 | orchestrator | 2025-03-11 01:03:54 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED 2025-03-11 01:03:54.726733 | orchestrator | 2025-03-11 01:03:54 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED 
2025-03-11 01:03:54.730352 | orchestrator | 2025-03-11 01:03:54 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:54.733145 | orchestrator | 2025-03-11 01:03:54 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:03:54.736377 | orchestrator | 2025-03-11 01:03:54 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:03:57.799916 | orchestrator | 2025-03-11 01:03:54 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:03:57.800059 | orchestrator | 2025-03-11 01:03:57 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state STARTED
2025-03-11 01:03:57.800155 | orchestrator | 2025-03-11 01:03:57 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:03:57.801023 | orchestrator | 2025-03-11 01:03:57 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:03:57.801979 | orchestrator | 2025-03-11 01:03:57 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:03:57.802836 | orchestrator | 2025-03-11 01:03:57 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:03:57.804369 | orchestrator | 2025-03-11 01:03:57 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:00.886224 | orchestrator | 2025-03-11 01:03:57 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:00.886313 | orchestrator | 2025-03-11 01:04:00 | INFO  | Task f7ec44d3-5eed-4b9e-a1c5-b229d147302d is in state SUCCESS
2025-03-11 01:04:03.938346 | orchestrator | 2025-03-11 01:04:00 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:03.938463 | orchestrator | 2025-03-11 01:04:00 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:03.938482 | orchestrator | 2025-03-11 01:04:00 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:03.938498 | orchestrator | 2025-03-11 01:04:00 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:03.938512 | orchestrator | 2025-03-11 01:04:00 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:03.938527 | orchestrator | 2025-03-11 01:04:00 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:03.938559 | orchestrator | 2025-03-11 01:04:03 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:03.942279 | orchestrator | 2025-03-11 01:04:03 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:03.947494 | orchestrator | 2025-03-11 01:04:03 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:07.046433 | orchestrator | 2025-03-11 01:04:03 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:07.046553 | orchestrator | 2025-03-11 01:04:03 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:07.046571 | orchestrator | 2025-03-11 01:04:03 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:07.046603 | orchestrator | 2025-03-11 01:04:07 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:07.049579 | orchestrator | 2025-03-11 01:04:07 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:07.057301 | orchestrator | 2025-03-11 01:04:07 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:07.064178 | orchestrator | 2025-03-11 01:04:07 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:10.152201 | orchestrator | 2025-03-11 01:04:07 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:10.152322 | orchestrator | 2025-03-11 01:04:07 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:10.152359 | orchestrator | 2025-03-11 01:04:10 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:10.154138 | orchestrator | 2025-03-11 01:04:10 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:10.154178 | orchestrator | 2025-03-11 01:04:10 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:10.157973 | orchestrator | 2025-03-11 01:04:10 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:10.161251 | orchestrator | 2025-03-11 01:04:10 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:13.273516 | orchestrator | 2025-03-11 01:04:10 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:13.273758 | orchestrator | 2025-03-11 01:04:13 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:13.277469 | orchestrator | 2025-03-11 01:04:13 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:13.277558 | orchestrator | 2025-03-11 01:04:13 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:13.279564 | orchestrator | 2025-03-11 01:04:13 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:13.282210 | orchestrator | 2025-03-11 01:04:13 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:13.289366 | orchestrator | 2025-03-11 01:04:13 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:16.356548 | orchestrator | 2025-03-11 01:04:16 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:16.361112 | orchestrator | 2025-03-11 01:04:16 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:16.364265 | orchestrator | 2025-03-11 01:04:16 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:16.371950 | orchestrator | 2025-03-11 01:04:16 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:16.373043 | orchestrator | 2025-03-11 01:04:16 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:16.373122 | orchestrator | 2025-03-11 01:04:16 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:19.487929 | orchestrator | 2025-03-11 01:04:19 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:19.488414 | orchestrator | 2025-03-11 01:04:19 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:19.488462 | orchestrator | 2025-03-11 01:04:19 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:19.493109 | orchestrator | 2025-03-11 01:04:19 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:19.502703 | orchestrator | 2025-03-11 01:04:19 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:22.613985 | orchestrator | 2025-03-11 01:04:19 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:22.614174 | orchestrator | 2025-03-11 01:04:22 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:22.617390 | orchestrator | 2025-03-11 01:04:22 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:22.626180 | orchestrator | 2025-03-11 01:04:22 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:22.636511 | orchestrator | 2025-03-11 01:04:22 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:25.715573 | orchestrator | 2025-03-11 01:04:22 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:25.715683 | orchestrator | 2025-03-11 01:04:22 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:25.715768 | orchestrator | 2025-03-11 01:04:25 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:25.717431 | orchestrator | 2025-03-11 01:04:25 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:25.718765 | orchestrator | 2025-03-11 01:04:25 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:25.721836 | orchestrator | 2025-03-11 01:04:25 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:25.725011 | orchestrator | 2025-03-11 01:04:25 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:28.787337 | orchestrator | 2025-03-11 01:04:25 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:28.787478 | orchestrator | 2025-03-11 01:04:28 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:28.788037 | orchestrator | 2025-03-11 01:04:28 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:28.788081 | orchestrator | 2025-03-11 01:04:28 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:28.788968 | orchestrator | 2025-03-11 01:04:28 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:28.791888 | orchestrator | 2025-03-11 01:04:28 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:31.872066 | orchestrator | 2025-03-11 01:04:28 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:31.872271 | orchestrator | 2025-03-11 01:04:31 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:31.872394 | orchestrator | 2025-03-11 01:04:31 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:31.875584 | orchestrator | 2025-03-11 01:04:31 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:31.880527 | orchestrator | 2025-03-11 01:04:31 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:31.883634 | orchestrator | 2025-03-11 01:04:31 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:35.001410 | orchestrator | 2025-03-11 01:04:31 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:35.001534 | orchestrator | 2025-03-11 01:04:34 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:35.001766 | orchestrator | 2025-03-11 01:04:34 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:35.001812 | orchestrator | 2025-03-11 01:04:34 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:35.001834 | orchestrator | 2025-03-11 01:04:34 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:35.017811 | orchestrator | 2025-03-11 01:04:35 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:38.107986 | orchestrator | 2025-03-11 01:04:35 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:38.108125 | orchestrator | 2025-03-11 01:04:38 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:38.113552 | orchestrator | 2025-03-11 01:04:38 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:38.113597 | orchestrator | 2025-03-11 01:04:38 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:38.114924 | orchestrator | 2025-03-11 01:04:38 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:38.117132 | orchestrator | 2025-03-11 01:04:38 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:38.117394 | orchestrator | 2025-03-11 01:04:38 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:41.214605 | orchestrator | 2025-03-11 01:04:41 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:41.222693 | orchestrator | 2025-03-11 01:04:41 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:41.226109 | orchestrator | 2025-03-11 01:04:41 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state STARTED
2025-03-11 01:04:41.227166 | orchestrator | 2025-03-11 01:04:41 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:41.228126 | orchestrator | 2025-03-11 01:04:41 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:41.228289 | orchestrator | 2025-03-11 01:04:41 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:44.297326 | orchestrator | 2025-03-11 01:04:44 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:44.298399 | orchestrator | 2025-03-11 01:04:44 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:44.303677 | orchestrator |
2025-03-11 01:04:44.303732 | orchestrator |
2025-03-11 01:04:44.303776 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-03-11 01:04:44.303794 | orchestrator |
2025-03-11 01:04:44.303809 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-03-11 01:04:44.303825 | orchestrator | Tuesday 11 March 2025 01:03:09 +0000 (0:00:00.866) 0:00:00.866 *********
2025-03-11 01:04:44.303840 | orchestrator | ok: [testbed-manager] => {
2025-03-11 01:04:44.303857 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-03-11 01:04:44.303873 | orchestrator | } 2025-03-11 01:04:44.303888 | orchestrator | 2025-03-11 01:04:44.303903 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-03-11 01:04:44.303917 | orchestrator | Tuesday 11 March 2025 01:03:09 +0000 (0:00:00.885) 0:00:01.751 ********* 2025-03-11 01:04:44.303980 | orchestrator | ok: [testbed-manager] 2025-03-11 01:04:44.304027 | orchestrator | 2025-03-11 01:04:44.304043 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-03-11 01:04:44.304057 | orchestrator | Tuesday 11 March 2025 01:03:12 +0000 (0:00:02.163) 0:00:03.915 ********* 2025-03-11 01:04:44.304071 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-03-11 01:04:44.304085 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-03-11 01:04:44.304100 | orchestrator | 2025-03-11 01:04:44.304113 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-03-11 01:04:44.304127 | orchestrator | Tuesday 11 March 2025 01:03:14 +0000 (0:00:02.210) 0:00:06.125 ********* 2025-03-11 01:04:44.304141 | orchestrator | changed: [testbed-manager] 2025-03-11 01:04:44.304155 | orchestrator | 2025-03-11 01:04:44.304169 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-03-11 01:04:44.304183 | orchestrator | Tuesday 11 March 2025 01:03:18 +0000 (0:00:03.892) 0:00:10.017 ********* 2025-03-11 01:04:44.304197 | orchestrator | changed: [testbed-manager] 2025-03-11 01:04:44.304211 | orchestrator | 2025-03-11 01:04:44.304225 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-03-11 01:04:44.304239 | orchestrator | Tuesday 11 March 2025 01:03:20 +0000 (0:00:02.617) 0:00:12.635 ********* 2025-03-11 01:04:44.304253 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-03-11 01:04:44.304270 | orchestrator | ok: [testbed-manager]
2025-03-11 01:04:44.304286 | orchestrator |
2025-03-11 01:04:44.304301 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-03-11 01:04:44.304317 | orchestrator | Tuesday 11 March 2025 01:03:47 +0000 (0:00:26.952) 0:00:39.587 *********
2025-03-11 01:04:44.304333 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.304348 | orchestrator |
2025-03-11 01:04:44.304370 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:04:44.304386 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:04:44.304406 | orchestrator |
2025-03-11 01:04:44.304421 | orchestrator |
2025-03-11 01:04:44.304436 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:04:44.304453 | orchestrator | Tuesday 11 March 2025 01:03:51 +0000 (0:00:03.296) 0:00:42.883 *********
2025-03-11 01:04:44.304469 | orchestrator | ===============================================================================
2025-03-11 01:04:44.304485 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.95s
2025-03-11 01:04:44.304500 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.89s
2025-03-11 01:04:44.304516 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.30s
2025-03-11 01:04:44.304532 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.62s
2025-03-11 01:04:44.304548 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.21s
2025-03-11 01:04:44.304564 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.16s
2025-03-11 01:04:44.304579 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.89s
2025-03-11 01:04:44.304594 | orchestrator |
2025-03-11 01:04:44.304610 | orchestrator |
2025-03-11 01:04:44.304624 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-03-11 01:04:44.304638 | orchestrator |
2025-03-11 01:04:44.304652 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-03-11 01:04:44.304666 | orchestrator | Tuesday 11 March 2025 01:03:07 +0000 (0:00:00.637) 0:00:00.637 *********
2025-03-11 01:04:44.304679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-03-11 01:04:44.304695 | orchestrator |
2025-03-11 01:04:44.304709 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-03-11 01:04:44.304723 | orchestrator | Tuesday 11 March 2025 01:03:07 +0000 (0:00:00.562) 0:00:01.200 *********
2025-03-11 01:04:44.304764 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-03-11 01:04:44.304779 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-03-11 01:04:44.304793 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-03-11 01:04:44.304807 | orchestrator |
2025-03-11 01:04:44.304821 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-03-11 01:04:44.304834 | orchestrator | Tuesday 11 March 2025 01:03:10 +0000 (0:00:02.503) 0:00:03.703 *********
2025-03-11 01:04:44.304848 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.304862 | orchestrator |
2025-03-11 01:04:44.304876 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-03-11 01:04:44.304890 | orchestrator | Tuesday 11 March 2025 01:03:12 +0000 (0:00:02.620) 0:00:06.324 *********
2025-03-11 01:04:44.304915 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-03-11 01:04:44.304930 | orchestrator | ok: [testbed-manager]
2025-03-11 01:04:44.304944 | orchestrator |
2025-03-11 01:04:44.304959 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-03-11 01:04:44.304973 | orchestrator | Tuesday 11 March 2025 01:03:50 +0000 (0:00:37.407) 0:00:43.732 *********
2025-03-11 01:04:44.304987 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.305001 | orchestrator |
2025-03-11 01:04:44.305015 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-03-11 01:04:44.305028 | orchestrator | Tuesday 11 March 2025 01:03:51 +0000 (0:00:01.481) 0:00:45.213 *********
2025-03-11 01:04:44.305042 | orchestrator | ok: [testbed-manager]
2025-03-11 01:04:44.305056 | orchestrator |
2025-03-11 01:04:44.305070 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-03-11 01:04:44.305084 | orchestrator | Tuesday 11 March 2025 01:03:52 +0000 (0:00:01.169) 0:00:46.383 *********
2025-03-11 01:04:44.305098 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.305112 | orchestrator |
2025-03-11 01:04:44.305126 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-03-11 01:04:44.305140 | orchestrator | Tuesday 11 March 2025 01:03:56 +0000 (0:00:03.316) 0:00:49.699 *********
2025-03-11 01:04:44.305154 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.305168 | orchestrator |
2025-03-11 01:04:44.305182 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-03-11 01:04:44.305196 | orchestrator | Tuesday 11 March 2025 01:03:57 +0000 (0:00:01.460) 0:00:51.160 *********
2025-03-11 01:04:44.305210 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.305223 | orchestrator |
2025-03-11 01:04:44.305237 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-03-11 01:04:44.305257 | orchestrator | Tuesday 11 March 2025 01:03:58 +0000 (0:00:01.111) 0:00:52.272 *********
2025-03-11 01:04:44.305271 | orchestrator | ok: [testbed-manager]
2025-03-11 01:04:44.305285 | orchestrator |
2025-03-11 01:04:44.305299 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:04:44.305313 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:04:44.305327 | orchestrator |
2025-03-11 01:04:44.305341 | orchestrator |
2025-03-11 01:04:44.305355 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:04:44.305369 | orchestrator | Tuesday 11 March 2025 01:03:59 +0000 (0:00:00.645) 0:00:52.917 *********
2025-03-11 01:04:44.305383 | orchestrator | ===============================================================================
2025-03-11 01:04:44.305397 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.41s
2025-03-11 01:04:44.305410 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.32s
2025-03-11 01:04:44.305424 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.62s
2025-03-11 01:04:44.305445 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.50s
2025-03-11 01:04:44.305458 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.48s
2025-03-11 01:04:44.305472 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.46s
2025-03-11 01:04:44.305486 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.17s
2025-03-11 01:04:44.305500 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.11s
2025-03-11 01:04:44.305514 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.65s
2025-03-11 01:04:44.305528 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.56s
2025-03-11 01:04:44.305541 | orchestrator |
2025-03-11 01:04:44.305555 | orchestrator |
2025-03-11 01:04:44.305569 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-11 01:04:44.305583 | orchestrator |
2025-03-11 01:04:44.305597 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-11 01:04:44.305611 | orchestrator | Tuesday 11 March 2025 01:03:06 +0000 (0:00:00.348) 0:00:00.348 *********
2025-03-11 01:04:44.305625 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-03-11 01:04:44.305639 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-03-11 01:04:44.305653 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-03-11 01:04:44.305666 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-03-11 01:04:44.305680 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-03-11 01:04:44.305694 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-03-11 01:04:44.305708 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-03-11 01:04:44.305722 | orchestrator |
2025-03-11 01:04:44.305736 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-03-11 01:04:44.305815 | orchestrator |
2025-03-11 01:04:44.305831 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-03-11 01:04:44.305845 | orchestrator | Tuesday 11 March 2025 01:03:10 +0000 (0:00:03.177) 0:00:03.526 *********
2025-03-11 01:04:44.305873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 01:04:44.305889 | orchestrator |
2025-03-11 01:04:44.305901 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-03-11 01:04:44.305914 | orchestrator | Tuesday 11 March 2025 01:03:13 +0000 (0:00:03.741) 0:00:07.267 *********
2025-03-11 01:04:44.305926 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:04:44.305938 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:04:44.305951 | orchestrator | ok: [testbed-manager]
2025-03-11 01:04:44.305963 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:04:44.305975 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:04:44.305993 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:04:44.306006 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:04:44.306084 | orchestrator |
2025-03-11 01:04:44.306098 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-03-11 01:04:44.306111 | orchestrator | Tuesday 11 March 2025 01:03:17 +0000 (0:00:04.013) 0:00:11.281 *********
2025-03-11 01:04:44.306123 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:04:44.306136 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:04:44.306148 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:04:44.306160 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:04:44.306173 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:04:44.306185 | orchestrator | ok: [testbed-manager]
2025-03-11 01:04:44.306198 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:04:44.306210 | orchestrator |
2025-03-11 01:04:44.306223 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-03-11 01:04:44.306235 | orchestrator | Tuesday 11 March 2025 01:03:22 +0000 (0:00:04.215) 0:00:15.496 *********
2025-03-11 01:04:44.306254 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:04:44.306267 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.306289 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:04:44.306302 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:04:44.306315 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:04:44.306327 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:04:44.306339 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:04:44.306352 | orchestrator |
2025-03-11 01:04:44.306364 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-03-11 01:04:44.306377 | orchestrator | Tuesday 11 March 2025 01:03:26 +0000 (0:00:04.036) 0:00:19.533 *********
2025-03-11 01:04:44.306389 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:04:44.306401 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:04:44.306413 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.306425 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:04:44.306438 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:04:44.306450 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:04:44.306462 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:04:44.306474 | orchestrator |
2025-03-11 01:04:44.306487 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-03-11 01:04:44.306499 | orchestrator | Tuesday 11 March 2025 01:03:37 +0000 (0:00:11.046) 0:00:30.579 *********
2025-03-11 01:04:44.306512 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:04:44.306524 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:04:44.306536 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:04:44.306548 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:04:44.306560 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:04:44.306573 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:04:44.306585 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.306597 | orchestrator |
2025-03-11 01:04:44.306614 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-03-11 01:04:44.306626 | orchestrator | Tuesday 11 March 2025 01:03:59 +0000 (0:00:22.323) 0:00:52.903 *********
2025-03-11 01:04:44.306640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 01:04:44.306656 | orchestrator |
2025-03-11 01:04:44.306669 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-03-11 01:04:44.306681 | orchestrator | Tuesday 11 March 2025 01:04:02 +0000 (0:00:02.811) 0:00:55.715 *********
2025-03-11 01:04:44.306693 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-03-11 01:04:44.306706 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-03-11 01:04:44.306718 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-03-11 01:04:44.306731 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-03-11 01:04:44.306758 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-03-11 01:04:44.306771 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-03-11 01:04:44.306783 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-03-11 01:04:44.306796 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-03-11 01:04:44.306808 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-03-11 01:04:44.306820 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-03-11 01:04:44.306833 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-03-11 01:04:44.306845 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-03-11 01:04:44.306857 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-03-11 01:04:44.306869 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-03-11 01:04:44.306882 | orchestrator |
2025-03-11 01:04:44.306894 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-03-11 01:04:44.306907 | orchestrator | Tuesday 11 March 2025 01:04:13 +0000 (0:00:11.389) 0:01:07.105 *********
2025-03-11 01:04:44.306925 | orchestrator | ok: [testbed-manager]
2025-03-11 01:04:44.306938 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:04:44.306950 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:04:44.306962 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:04:44.306974 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:04:44.306986 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:04:44.306999 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:04:44.307011 | orchestrator |
2025-03-11 01:04:44.307023 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-03-11 01:04:44.307035 | orchestrator | Tuesday 11 March 2025 01:04:17 +0000 (0:00:03.607) 0:01:10.713 *********
2025-03-11 01:04:44.307047 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:04:44.307060 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.307072 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:04:44.307084 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:04:44.307096 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:04:44.307108 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:04:44.307121 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:04:44.307133 | orchestrator |
2025-03-11 01:04:44.307145 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-03-11 01:04:44.307165 | orchestrator | Tuesday 11 March 2025 01:04:22 +0000 (0:00:05.141) 0:01:15.854 *********
2025-03-11 01:04:44.307178 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:04:44.307190 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:04:44.307203 | orchestrator | ok: [testbed-manager]
2025-03-11 01:04:44.307215 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:04:44.307227 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:04:44.307239 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:04:44.307252 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:04:44.307264 | orchestrator |
2025-03-11 01:04:44.307276 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-03-11 01:04:44.307289 | orchestrator | Tuesday 11 March 2025 01:04:25 +0000 (0:00:02.720) 0:01:18.575 *********
2025-03-11 01:04:44.307301 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:04:44.307313 | orchestrator | ok: [testbed-manager]
2025-03-11 01:04:44.307326 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:04:44.307338 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:04:44.307350 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:04:44.307363 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:04:44.307375 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:04:44.307387 | orchestrator |
2025-03-11 01:04:44.307400 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-03-11 01:04:44.307412 | orchestrator | Tuesday 11 March 2025 01:04:28 +0000 (0:00:03.535) 0:01:22.111 *********
2025-03-11 01:04:44.307425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-03-11 01:04:44.307439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 01:04:44.307451 | orchestrator |
2025-03-11 01:04:44.307464 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-03-11 01:04:44.307476 | orchestrator | Tuesday 11 March 2025 01:04:31 +0000 (0:00:02.989) 0:01:25.100 *********
2025-03-11 01:04:44.307489 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.307501 | orchestrator |
2025-03-11 01:04:44.307513 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-03-11 01:04:44.307526 | orchestrator | Tuesday 11 March 2025 01:04:36 +0000 (0:00:04.638) 0:01:29.739 *********
2025-03-11 01:04:44.307538 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:04:44.307551 | orchestrator | changed: [testbed-manager]
2025-03-11 01:04:44.307563 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:04:44.307576 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:04:44.307601 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:04:44.307615 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:04:44.307627 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:04:44.307640 | orchestrator |
2025-03-11 01:04:44.307652 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:04:44.307665 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:04:44.307677 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:04:44.307690 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:04:44.307707 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:04:44.307720 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:04:44.307733 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:04:44.307794 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:04:44.307808 | orchestrator |
2025-03-11 01:04:44.307821 | orchestrator |
2025-03-11 01:04:44.307834 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:04:44.307846 | orchestrator | Tuesday 11 March 2025 01:04:41 +0000 (0:00:05.335) 0:01:35.081 *********
2025-03-11 01:04:44.307859 | orchestrator | ===============================================================================
2025-03-11 01:04:44.307871 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 22.32s
2025-03-11 01:04:44.307883 | orchestrator | osism.services.netdata : Copy configuration files ---------------------- 11.39s
2025-03-11 01:04:44.307896 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.05s
2025-03-11 01:04:44.307908 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 5.35s
2025-03-11 01:04:44.307921 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 5.14s
2025-03-11 01:04:44.307933 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 4.64s
2025-03-11 01:04:44.307946 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.22s
2025-03-11 01:04:44.307958 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 4.04s
2025-03-11 01:04:44.307970 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 4.01s
2025-03-11 01:04:44.307983 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.74s
2025-03-11 01:04:44.307996 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 3.61s
2025-03-11 01:04:44.308013 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.54s
2025-03-11 01:04:47.362709 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.18s
2025-03-11 01:04:47.362968 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.99s
2025-03-11 01:04:47.362985 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.81s
2025-03-11 01:04:47.362998 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.72s
2025-03-11 01:04:47.363010 | orchestrator | 2025-03-11 01:04:44 | INFO  | Task a648303b-889d-4de0-a7b0-a18d85b7737e is in state SUCCESS
2025-03-11 01:04:47.363022 | orchestrator | 2025-03-11 01:04:44 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state STARTED
2025-03-11 01:04:47.363034 | orchestrator | 2025-03-11 01:04:44 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:47.363071 | orchestrator | 2025-03-11 01:04:44 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:47.363098 | orchestrator | 2025-03-11 01:04:47 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:47.365633 | orchestrator | 2025-03-11 01:04:47 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:47.366559 | orchestrator | 2025-03-11 01:04:47 | INFO  | Task 81dfc14d-b96b-4ab3-9540-efa075b49da9 is in state SUCCESS
2025-03-11 01:04:47.373227 | orchestrator | 2025-03-11 01:04:47 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:50.425654 | orchestrator | 2025-03-11 01:04:47 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:50.425824 | orchestrator | 2025-03-11 01:04:50 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:50.425914 | orchestrator | 2025-03-11 01:04:50 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:50.427253 | orchestrator | 2025-03-11 01:04:50 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:53.484440 | orchestrator | 2025-03-11 01:04:50 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:53.484600 | orchestrator | 2025-03-11 01:04:53 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:53.492230 | orchestrator | 2025-03-11 01:04:53 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:53.492267 | orchestrator | 2025-03-11 01:04:53 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:56.542395 | orchestrator | 2025-03-11 01:04:53 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:56.542527 | orchestrator | 2025-03-11 01:04:56 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:56.542813 | orchestrator | 2025-03-11 01:04:56 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:56.544577 | orchestrator | 2025-03-11 01:04:56 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:04:56.544645 | orchestrator | 2025-03-11 01:04:56 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:04:59.637657 | orchestrator | 2025-03-11 01:04:59 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:04:59.637880 | orchestrator | 2025-03-11 01:04:59 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:04:59.640406 | orchestrator | 2025-03-11 01:04:59 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:02.716537 | orchestrator | 2025-03-11 01:04:59 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:02.716679 | orchestrator | 2025-03-11 01:05:02 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:02.719100 | orchestrator | 2025-03-11 01:05:02 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:02.719870 | orchestrator | 2025-03-11 01:05:02 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:02.719948 | orchestrator | 2025-03-11 01:05:02 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:05.768469 | orchestrator | 2025-03-11 01:05:05 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:05.769624 | orchestrator | 2025-03-11 01:05:05 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:05.770811 | orchestrator | 2025-03-11 01:05:05 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:08.852071 | orchestrator | 2025-03-11 01:05:05 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:08.852212 | orchestrator | 2025-03-11 01:05:08 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:08.859410 | orchestrator | 2025-03-11 01:05:08 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:08.866318 | orchestrator | 2025-03-11 01:05:08 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:11.962489 | orchestrator | 2025-03-11 01:05:08 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:11.962601 | orchestrator | 2025-03-11 01:05:11 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:11.963276 | orchestrator | 2025-03-11 01:05:11 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:11.972141 | orchestrator | 2025-03-11 01:05:11 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:15.007973 | orchestrator | 2025-03-11 01:05:11 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:15.008112 | orchestrator | 2025-03-11 01:05:15 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:15.008413 | orchestrator | 2025-03-11 01:05:15 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:15.011364 | orchestrator | 2025-03-11 01:05:15 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:15.013478 | orchestrator | 2025-03-11 01:05:15 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:18.074829 | orchestrator | 2025-03-11 01:05:18 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:21.130289 | orchestrator | 2025-03-11 01:05:18 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:21.130418 | orchestrator | 2025-03-11 01:05:18 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:21.130439 | orchestrator | 2025-03-11 01:05:18 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:21.130471 | orchestrator | 2025-03-11 01:05:21 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:21.131113 | orchestrator | 2025-03-11 01:05:21 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:21.134669 | orchestrator | 2025-03-11 01:05:21 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:24.214121 | orchestrator | 2025-03-11 01:05:21 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:24.214258 | orchestrator | 2025-03-11 01:05:24 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:24.214347 | orchestrator | 2025-03-11 01:05:24 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:24.217460 | orchestrator | 2025-03-11 01:05:24 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:27.282388 | orchestrator | 2025-03-11 01:05:24 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:27.282531 | orchestrator | 2025-03-11 01:05:27 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:27.282867 | orchestrator | 2025-03-11 01:05:27 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:27.286085 | orchestrator | 2025-03-11 01:05:27 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:30.335543 | orchestrator | 2025-03-11 01:05:27 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:30.335689 | orchestrator | 2025-03-11 01:05:30 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:30.336554 | orchestrator | 2025-03-11 01:05:30 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:30.341691 | orchestrator | 2025-03-11 01:05:30 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:33.389420 | orchestrator | 2025-03-11 01:05:30 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:33.389553 | orchestrator | 2025-03-11 01:05:33 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:33.389922 | orchestrator | 2025-03-11 01:05:33 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:33.392381 | orchestrator | 2025-03-11 01:05:33 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:33.392456 | orchestrator | 2025-03-11 01:05:33 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:36.436008 | orchestrator | 2025-03-11 01:05:36 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:36.436569 | orchestrator | 2025-03-11 01:05:36 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:36.439466 | orchestrator | 2025-03-11 01:05:36 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:39.487136 | orchestrator | 2025-03-11 01:05:36 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:39.487270 | orchestrator | 2025-03-11 01:05:39 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:39.488889 | orchestrator | 2025-03-11 01:05:39 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:39.490380 | orchestrator | 2025-03-11 01:05:39 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:42.536024 | orchestrator | 2025-03-11 01:05:39 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:42.536162 | orchestrator | 2025-03-11 01:05:42 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:42.537615 | orchestrator | 2025-03-11 01:05:42 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:42.538623 | orchestrator | 2025-03-11 01:05:42 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:45.586165 | orchestrator | 2025-03-11 01:05:42 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:45.586323 | orchestrator | 2025-03-11 01:05:45 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:45.586418 | orchestrator | 2025-03-11 01:05:45 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED
2025-03-11 01:05:45.586929 | orchestrator | 2025-03-11 01:05:45 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:05:45.587312 | orchestrator | 2025-03-11 01:05:45 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:05:48.644481 | orchestrator | 2025-03-11 01:05:48 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:05:48.654977 | orchestrator
| 2025-03-11 01:05:48 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED 2025-03-11 01:05:51.696270 | orchestrator | 2025-03-11 01:05:48 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:05:51.696393 | orchestrator | 2025-03-11 01:05:48 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:05:51.696459 | orchestrator | 2025-03-11 01:05:51 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED 2025-03-11 01:05:51.696538 | orchestrator | 2025-03-11 01:05:51 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED 2025-03-11 01:05:51.699025 | orchestrator | 2025-03-11 01:05:51 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:05:54.739027 | orchestrator | 2025-03-11 01:05:51 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:05:54.739159 | orchestrator | 2025-03-11 01:05:54 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED 2025-03-11 01:05:54.739249 | orchestrator | 2025-03-11 01:05:54 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state STARTED 2025-03-11 01:05:54.740090 | orchestrator | 2025-03-11 01:05:54 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:05:54.740381 | orchestrator | 2025-03-11 01:05:54 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:05:57.790401 | orchestrator | 2025-03-11 01:05:57 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED 2025-03-11 01:05:57.793517 | orchestrator | 2025-03-11 01:05:57 | INFO  | Task a9c76b30-ccc6-4a44-b6bd-48a7a2f49ab2 is in state SUCCESS 2025-03-11 01:05:57.795290 | orchestrator | 2025-03-11 01:05:57.795348 | orchestrator | 2025-03-11 01:05:57.795363 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-03-11 01:05:57.795379 | orchestrator | 2025-03-11 01:05:57.795394 | orchestrator | TASK [osism.services.phpmyadmin : Create 
traefik external network] ************* 2025-03-11 01:05:57.795408 | orchestrator | Tuesday 11 March 2025 01:03:35 +0000 (0:00:00.231) 0:00:00.231 ********* 2025-03-11 01:05:57.795422 | orchestrator | ok: [testbed-manager] 2025-03-11 01:05:57.795438 | orchestrator | 2025-03-11 01:05:57.795452 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-03-11 01:05:57.795466 | orchestrator | Tuesday 11 March 2025 01:03:37 +0000 (0:00:01.347) 0:00:01.579 ********* 2025-03-11 01:05:57.795481 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-03-11 01:05:57.795495 | orchestrator | 2025-03-11 01:05:57.795509 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-03-11 01:05:57.795523 | orchestrator | Tuesday 11 March 2025 01:03:39 +0000 (0:00:01.924) 0:00:03.503 ********* 2025-03-11 01:05:57.795537 | orchestrator | changed: [testbed-manager] 2025-03-11 01:05:57.795551 | orchestrator | 2025-03-11 01:05:57.795565 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-03-11 01:05:57.795579 | orchestrator | Tuesday 11 March 2025 01:03:42 +0000 (0:00:03.084) 0:00:06.588 ********* 2025-03-11 01:05:57.795593 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-03-11 01:05:57.795607 | orchestrator | ok: [testbed-manager]
2025-03-11 01:05:57.795621 | orchestrator |
2025-03-11 01:05:57.795634 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-03-11 01:05:57.795648 | orchestrator | Tuesday 11 March 2025 01:04:38 +0000 (0:00:56.609) 0:01:03.198 *********
2025-03-11 01:05:57.795662 | orchestrator | changed: [testbed-manager]
2025-03-11 01:05:57.795676 | orchestrator |
2025-03-11 01:05:57.795689 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:05:57.795703 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:05:57.795719 | orchestrator |
2025-03-11 01:05:57.795733 | orchestrator |
2025-03-11 01:05:57.795747 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:05:57.795761 | orchestrator | Tuesday 11 March 2025 01:04:43 +0000 (0:00:04.717) 0:01:07.915 *********
2025-03-11 01:05:57.795775 | orchestrator | ===============================================================================
2025-03-11 01:05:57.795809 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.61s
2025-03-11 01:05:57.795823 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.72s
2025-03-11 01:05:57.795837 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 3.08s
2025-03-11 01:05:57.795877 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.92s
2025-03-11 01:05:57.795893 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.35s
2025-03-11 01:05:57.795909 | orchestrator |
2025-03-11 01:05:57.795926 | orchestrator |
2025-03-11 01:05:57.795941 | orchestrator | PLAY [Apply role common] *******************************************************
2025-03-11 01:05:57.795956 | orchestrator |
2025-03-11 01:05:57.795971 | orchestrator | TASK [common : include_tasks] **************************************************
2025-03-11 01:05:57.795988 | orchestrator | Tuesday 11 March 2025 01:03:01 +0000 (0:00:00.434) 0:00:00.434 *********
2025-03-11 01:05:57.796004 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 01:05:57.796021 | orchestrator |
2025-03-11 01:05:57.796036 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-03-11 01:05:57.796052 | orchestrator | Tuesday 11 March 2025 01:03:03 +0000 (0:00:01.852) 0:00:02.286 *********
2025-03-11 01:05:57.796067 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-11 01:05:57.796083 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-11 01:05:57.796116 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-11 01:05:57.796144 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-11 01:05:57.796171 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-11 01:05:57.796199 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-11 01:05:57.796227 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-11 01:05:57.796252 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-11 01:05:57.796277 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-11 01:05:57.796301 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-11 01:05:57.796325 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-03-11 01:05:57.796349 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-11 01:05:57.796375 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-11 01:05:57.796401 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-11 01:05:57.796425 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-11 01:05:57.796449 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-11 01:05:57.796489 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-03-11 01:05:57.796514 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-11 01:05:57.796528 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-11 01:05:57.796542 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-11 01:05:57.796556 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-03-11 01:05:57.796570 | orchestrator |
2025-03-11 01:05:57.796583 | orchestrator | TASK [common : include_tasks] **************************************************
2025-03-11 01:05:57.796608 | orchestrator | Tuesday 11 March 2025 01:03:08 +0000 (0:00:05.530) 0:00:07.817 *********
2025-03-11 01:05:57.796623 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 01:05:57.796644 | orchestrator |
2025-03-11 01:05:57.796658 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-03-11 01:05:57.796672 | orchestrator | Tuesday 11 March 2025 01:03:11 +0000 (0:00:02.783) 0:00:10.601 *********
2025-03-11 01:05:57.796690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.796708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.796723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.796737 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.796752 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.796766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.796788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.796810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.796825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.796840 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.796891 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.796907 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.796935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.796960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.796975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.796990 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797019 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797051 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797080 | orchestrator |
2025-03-11 01:05:57.797095 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-03-11 01:05:57.797109 | orchestrator | Tuesday 11 March 2025 01:03:19 +0000 (0:00:07.816) 0:00:18.417 *********
2025-03-11 01:05:57.797149 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.797164 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797179 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797194 | orchestrator | skipping: [testbed-manager]
2025-03-11 01:05:57.797208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.797223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.797267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797313 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:05:57.797327 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:05:57.797341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.797355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.797399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797434 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:05:57.797448 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:05:57.797463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.797484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797513 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:05:57.797527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.797541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797570 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:05:57.797584 | orchestrator |
2025-03-11 01:05:57.797598 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-03-11 01:05:57.797612 | orchestrator | Tuesday 11 March 2025 01:03:23 +0000 (0:00:03.573) 0:00:21.990 *********
2025-03-11 01:05:57.797626 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-03-11 01:05:57.797646 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.797670 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1',
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798228 | orchestrator | skipping: [testbed-manager] 2025-03-11 01:05:57.798259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:05:57.798275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798305 | 
orchestrator | skipping: [testbed-node-0] 2025-03-11 01:05:57.798320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:05:57.798335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:05:57.798399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798427 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:05:57.798439 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:05:57.798452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:05:57.798465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798496 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:05:57.798513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:05:57.798526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798568 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:05:57.798581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-11 01:05:57.798594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.798620 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:05:57.798633 | orchestrator | 2025-03-11 01:05:57.798646 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-03-11 01:05:57.798658 | orchestrator | Tuesday 11 March 2025 01:03:27 +0000 (0:00:04.815) 0:00:26.806 ********* 2025-03-11 01:05:57.798671 | orchestrator | skipping: [testbed-manager] 2025-03-11 01:05:57.798686 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:05:57.798705 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:05:57.798720 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:05:57.798734 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:05:57.798747 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:05:57.798761 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:05:57.798775 | orchestrator | 2025-03-11 01:05:57.798789 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-03-11 01:05:57.798803 | orchestrator | Tuesday 11 March 2025 01:03:29 +0000 (0:00:01.632) 0:00:28.438 ********* 2025-03-11 01:05:57.798819 | orchestrator | skipping: [testbed-manager] 2025-03-11 01:05:57.798841 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:05:57.798883 | 
orchestrator | skipping: [testbed-node-1] 2025-03-11 01:05:57.798905 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:05:57.798927 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:05:57.798948 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:05:57.798963 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:05:57.798977 | orchestrator | 2025-03-11 01:05:57.798991 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-03-11 01:05:57.799005 | orchestrator | Tuesday 11 March 2025 01:03:31 +0000 (0:00:01.455) 0:00:29.894 ********* 2025-03-11 01:05:57.799019 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:05:57.799033 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:05:57.799046 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:05:57.799058 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:05:57.799070 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:05:57.799082 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:05:57.799094 | orchestrator | changed: [testbed-manager] 2025-03-11 01:05:57.799106 | orchestrator | 2025-03-11 01:05:57.799118 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-03-11 01:05:57.799131 | orchestrator | Tuesday 11 March 2025 01:04:04 +0000 (0:00:33.365) 0:01:03.260 ********* 2025-03-11 01:05:57.799143 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:05:57.799155 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:05:57.799167 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:05:57.799179 | orchestrator | ok: [testbed-manager] 2025-03-11 01:05:57.799191 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:05:57.799203 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:05:57.799216 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:05:57.799228 | orchestrator | 2025-03-11 01:05:57.799240 | orchestrator | TASK [common : Set fluentd facts] 
********************************************** 2025-03-11 01:05:57.799252 | orchestrator | Tuesday 11 March 2025 01:04:09 +0000 (0:00:05.254) 0:01:08.515 ********* 2025-03-11 01:05:57.799265 | orchestrator | ok: [testbed-manager] 2025-03-11 01:05:57.799283 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:05:57.799296 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:05:57.799308 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:05:57.799320 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:05:57.799332 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:05:57.799344 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:05:57.799356 | orchestrator | 2025-03-11 01:05:57.799369 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-03-11 01:05:57.799381 | orchestrator | Tuesday 11 March 2025 01:04:11 +0000 (0:00:02.273) 0:01:10.788 ********* 2025-03-11 01:05:57.799394 | orchestrator | skipping: [testbed-manager] 2025-03-11 01:05:57.799406 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:05:57.799419 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:05:57.799431 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:05:57.799450 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:05:57.799463 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:05:57.799475 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:05:57.799488 | orchestrator | 2025-03-11 01:05:57.799500 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-03-11 01:05:57.799513 | orchestrator | Tuesday 11 March 2025 01:04:14 +0000 (0:00:02.156) 0:01:12.946 ********* 2025-03-11 01:05:57.799533 | orchestrator | skipping: [testbed-manager] 2025-03-11 01:05:57.799545 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:05:57.799557 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:05:57.799570 | orchestrator | skipping: [testbed-node-2] 2025-03-11 
01:05:57.799582 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:05:57.799595 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:05:57.799607 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:05:57.799619 | orchestrator | 2025-03-11 01:05:57.799631 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-03-11 01:05:57.799644 | orchestrator | Tuesday 11 March 2025 01:04:15 +0000 (0:00:01.486) 0:01:14.432 ********* 2025-03-11 01:05:57.799656 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.799670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.799683 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.799713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799726 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.799769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.799808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.799873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.799902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.799979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.799992 | orchestrator |
2025-03-11 01:05:57.800005 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-03-11 01:05:57.800022 | orchestrator | Tuesday 11 March 2025 01:04:25 +0000 (0:00:09.457)       0:01:23.889 *********
2025-03-11 01:05:57.800035 | orchestrator | [WARNING]: Skipped
2025-03-11 01:05:57.800048 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-03-11 01:05:57.800060 | orchestrator | to this access issue:
2025-03-11 01:05:57.800073 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-03-11 01:05:57.800085 | orchestrator | directory
2025-03-11 01:05:57.800097 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-11 01:05:57.800110 | orchestrator |
2025-03-11 01:05:57.800122 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-03-11 01:05:57.800135 | orchestrator | Tuesday 11 March 2025 01:04:26 +0000 (0:00:01.760)       0:01:25.650 *********
2025-03-11 01:05:57.800147 | orchestrator | [WARNING]: Skipped
2025-03-11 01:05:57.800164 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-03-11 01:05:57.800176 | orchestrator | to this access issue:
2025-03-11 01:05:57.800189 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-03-11 01:05:57.800201 | orchestrator | directory
2025-03-11 01:05:57.800214 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-11 01:05:57.800226 | orchestrator |
2025-03-11 01:05:57.800239 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-03-11 01:05:57.800251 | orchestrator | Tuesday 11 March 2025 01:04:28 +0000 (0:00:01.523)       0:01:27.173 *********
2025-03-11 01:05:57.800264 | orchestrator | [WARNING]: Skipped
2025-03-11 01:05:57.800276 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-03-11 01:05:57.800289 | orchestrator | to this access issue:
2025-03-11 01:05:57.800301 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-03-11 01:05:57.800314 | orchestrator | directory
2025-03-11 01:05:57.800326 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-11 01:05:57.800339 | orchestrator |
2025-03-11 01:05:57.800351 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-03-11 01:05:57.800363 | orchestrator | Tuesday 11 March 2025 01:04:29 +0000 (0:00:00.722)       0:01:27.895 *********
2025-03-11 01:05:57.800376 | orchestrator | [WARNING]: Skipped
2025-03-11 01:05:57.800388 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-03-11 01:05:57.800400 | orchestrator | to this access issue:
2025-03-11 01:05:57.800413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-03-11 01:05:57.800425 | orchestrator | directory
2025-03-11 01:05:57.800438 | orchestrator | ok: [testbed-manager -> localhost]
2025-03-11 01:05:57.800450 | orchestrator |
2025-03-11 01:05:57.800463 | orchestrator | TASK [common : Copying over td-agent.conf] *************************************
2025-03-11 01:05:57.800475 | orchestrator | Tuesday 11 March 2025 01:04:29 +0000 (0:00:00.918)       0:01:28.814 *********
2025-03-11 01:05:57.800487 | orchestrator | changed: [testbed-manager]
2025-03-11 01:05:57.800500 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:05:57.800513 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:05:57.800525 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:05:57.800537 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:05:57.800550 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:05:57.800562 |
orchestrator | changed: [testbed-node-5]
2025-03-11 01:05:57.800574 | orchestrator |
2025-03-11 01:05:57.800587 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-03-11 01:05:57.800599 | orchestrator | Tuesday 11 March 2025 01:04:36 +0000 (0:00:06.269)       0:01:35.084 *********
2025-03-11 01:05:57.800612 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-11 01:05:57.800624 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-11 01:05:57.800637 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-11 01:05:57.800658 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-11 01:05:57.800671 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-11 01:05:57.800684 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-11 01:05:57.800696 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-03-11 01:05:57.800708 | orchestrator |
2025-03-11 01:05:57.800720 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-03-11 01:05:57.800733 | orchestrator | Tuesday 11 March 2025 01:04:42 +0000 (0:00:05.987)       0:01:41.071 *********
2025-03-11 01:05:57.800746 | orchestrator | changed: [testbed-manager]
2025-03-11 01:05:57.800758 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:05:57.800770 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:05:57.800783 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:05:57.800795 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:05:57.800807 |
orchestrator | changed: [testbed-node-4] 2025-03-11 01:05:57.800820 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:05:57.800832 | orchestrator | 2025-03-11 01:05:57.800889 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-03-11 01:05:57.800905 | orchestrator | Tuesday 11 March 2025 01:04:47 +0000 (0:00:04.879) 0:01:45.951 ********* 2025-03-11 01:05:57.800918 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.800937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.800951 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.800969 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.800982 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-03-11 01:05:57.801044 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801055 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801066 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801081 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.801106 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801117 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.801144 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.801165 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801181 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801192 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:05:57.801218 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801229 | orchestrator | 
2025-03-11 01:05:57.801239 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-03-11 01:05:57.801249 | orchestrator | Tuesday 11 March 2025 01:04:50 +0000 (0:00:03.756)       0:01:49.708 *********
2025-03-11 01:05:57.801260 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-11 01:05:57.801270 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-11 01:05:57.801281 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-11 01:05:57.801291 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-11 01:05:57.801301 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-11 01:05:57.801311 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-11 01:05:57.801322 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-03-11 01:05:57.801332 | orchestrator |
2025-03-11 01:05:57.801342 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-03-11 01:05:57.801356 | orchestrator | Tuesday 11 March 2025 01:04:53 +0000 (0:00:03.103)       0:01:52.812 *********
2025-03-11 01:05:57.801367 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-11 01:05:57.801377 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-11 01:05:57.801387 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-11 01:05:57.801398 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-03-11 01:05:57.801408 |
orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-11 01:05:57.801418 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-11 01:05:57.801428 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-11 01:05:57.801439 | orchestrator | 2025-03-11 01:05:57.801449 | orchestrator | TASK [common : Check common containers] **************************************** 2025-03-11 01:05:57.801459 | orchestrator | Tuesday 11 March 2025 01:04:57 +0000 (0:00:03.664) 0:01:56.476 ********* 2025-03-11 01:05:57.801469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801484 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801587 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801630 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-11 01:05:57.801641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801706 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:05:57.801717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:05:57.801728 | orchestrator |
2025-03-11 01:05:57.801738 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-03-11 01:05:57.801748 | orchestrator | Tuesday 11 March 2025 01:05:02 +0000 (0:00:04.886)       0:02:01.362 *********
2025-03-11 01:05:57.801759 | orchestrator | changed: [testbed-manager]
2025-03-11 01:05:57.801769 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:05:57.801779 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:05:57.801790 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:05:57.801800 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:05:57.801810 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:05:57.801820 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:05:57.801830 | orchestrator |
2025-03-11 01:05:57.801841 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-03-11 01:05:57.801862 | orchestrator | Tuesday 11 March 2025 01:05:04 +0000 (0:00:02.412)       0:02:03.775 *********
2025-03-11 01:05:57.801873 | orchestrator | changed: [testbed-manager]
2025-03-11 01:05:57.801883 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:05:57.801897 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:05:57.801908 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:05:57.801918 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:05:57.801928 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:05:57.801938 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:05:57.801948 | orchestrator |
2025-03-11 01:05:57.801959 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-03-11 01:05:57.801969 | orchestrator | Tuesday 11 March 2025 01:05:06 +0000 (0:00:01.569)       0:02:05.345 *********
2025-03-11 01:05:57.801984 | orchestrator |
2025-03-11 01:05:57.801995 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-03-11 01:05:57.802005 | orchestrator | Tuesday 11 March 2025 01:05:06 +0000 (0:00:00.263)       0:02:05.609 *********
2025-03-11 01:05:57.802048 | orchestrator |
2025-03-11 01:05:57.802061 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-03-11 01:05:57.802072 | orchestrator | Tuesday 11 March 2025 01:05:06 +0000 (0:00:00.060)       0:02:05.670 *********
2025-03-11 01:05:57.802082 | orchestrator |
2025-03-11 01:05:57.802093 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-03-11 01:05:57.802103 | orchestrator | Tuesday 11 March 2025 01:05:06 +0000 (0:00:00.056)       0:02:05.726 *********
2025-03-11 01:05:57.802113 | orchestrator |
2025-03-11 01:05:57.802123 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-03-11 01:05:57.802133 | orchestrator | Tuesday 11 March 2025 01:05:06 +0000 (0:00:00.064)       0:02:05.791 *********
2025-03-11 01:05:57.802143 | orchestrator |
2025-03-11 01:05:57.802154 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-03-11 01:05:57.802164 | orchestrator | Tuesday 11 March 2025 01:05:07 +0000 (0:00:00.288)       0:02:06.079 *********
2025-03-11 01:05:57.802174 | orchestrator |
2025-03-11 01:05:57.802184 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-03-11 01:05:57.802194 | orchestrator | Tuesday 11 March 2025 01:05:07 +0000 (0:00:00.065)       0:02:06.145 *********
2025-03-11 01:05:57.802204 | orchestrator |
2025-03-11 01:05:57.802214 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-03-11 01:05:57.802229 | orchestrator | Tuesday 11 March 2025 01:05:07 +0000 (0:00:00.077)       0:02:06.223 *********
2025-03-11 01:05:57.802240 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:05:57.802250 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:05:57.802260 | orchestrator | changed: [testbed-manager]
2025-03-11 01:05:57.802271 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:05:57.802281 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:05:57.802291 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:05:57.802301 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:05:57.802311 | orchestrator |
2025-03-11 01:05:57.802322 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-03-11 01:05:57.802332 | orchestrator | Tuesday 11 March 2025 01:05:17 +0000 (0:00:10.141)       0:02:16.364 *********
2025-03-11 01:05:57.802343 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:05:57.802353 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:05:57.802363 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:05:57.802373 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:05:57.802383 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:05:57.802394 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:05:57.802404 | orchestrator | changed: [testbed-manager]
2025-03-11 01:05:57.802414 | orchestrator |
2025-03-11 01:05:57.802428 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-03-11 01:05:57.802438 | orchestrator | Tuesday 11 March 2025 01:05:42 +0000 (0:00:25.019)       0:02:41.383 *********
2025-03-11 01:05:57.802449 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:05:57.802459 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:05:57.802469 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:05:57.802480 | orchestrator | ok: [testbed-manager]
2025-03-11 01:05:57.802490 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:05:57.802500 |
orchestrator | ok: [testbed-node-4]
2025-03-11 01:05:57.802510 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:05:57.802521 | orchestrator |
2025-03-11 01:05:57.802531 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-03-11 01:05:57.802541 | orchestrator | Tuesday 11 March 2025 01:05:45 +0000 (0:00:03.009) 0:02:44.393 *********
2025-03-11 01:05:57.802552 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:05:57.802562 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:05:57.802572 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:05:57.802582 | orchestrator | changed: [testbed-manager]
2025-03-11 01:05:57.802598 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:05:57.802608 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:05:57.802618 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:05:57.802628 | orchestrator |
2025-03-11 01:05:57.802639 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:05:57.802649 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-11 01:05:57.802661 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-11 01:05:57.802671 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-11 01:05:57.802682 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-11 01:05:57.802692 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-11 01:05:57.802703 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-11 01:05:57.802713 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-03-11 01:05:57.802723 | orchestrator |
2025-03-11 01:05:57.802734 | orchestrator |
2025-03-11 01:05:57.802744 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:05:57.802754 | orchestrator | Tuesday 11 March 2025 01:05:56 +0000 (0:00:11.093) 0:02:55.487 *********
2025-03-11 01:05:57.802764 | orchestrator | ===============================================================================
2025-03-11 01:05:57.802774 | orchestrator | common : Ensure fluentd image is present for label check --------------- 33.37s
2025-03-11 01:05:57.802785 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 25.02s
2025-03-11 01:05:57.802795 | orchestrator | common : Restart cron container ---------------------------------------- 11.09s
2025-03-11 01:05:57.802805 | orchestrator | common : Restart fluentd container ------------------------------------- 10.14s
2025-03-11 01:05:57.802815 | orchestrator | common : Copying over config.json files for services -------------------- 9.46s
2025-03-11 01:05:57.802825 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 7.82s
2025-03-11 01:05:57.802835 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 6.27s
2025-03-11 01:05:57.802857 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.99s
2025-03-11 01:05:57.802868 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.53s
2025-03-11 01:05:57.802878 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 5.25s
2025-03-11 01:05:57.802889 | orchestrator | common : Check common containers ---------------------------------------- 4.89s
2025-03-11 01:05:57.802899 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.88s
2025-03-11 01:05:57.802909 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.82s
2025-03-11 01:05:57.802923 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.76s
2025-03-11 01:06:00.857331 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.66s
2025-03-11 01:06:00.857490 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.57s
2025-03-11 01:06:00.857515 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.10s
2025-03-11 01:06:00.857542 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.01s
2025-03-11 01:06:00.857566 | orchestrator | common : include_tasks -------------------------------------------------- 2.78s
2025-03-11 01:06:00.857627 | orchestrator | common : Creating log volume -------------------------------------------- 2.41s
2025-03-11 01:06:00.857644 | orchestrator | 2025-03-11 01:05:57 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:00.857658 | orchestrator | 2025-03-11 01:05:57 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:00.857693 | orchestrator | 2025-03-11 01:06:00 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:00.862447 | orchestrator | 2025-03-11 01:06:00 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:00.862521 | orchestrator | 2025-03-11 01:06:00 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:00.863239 | orchestrator | 2025-03-11 01:06:00 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:00.863269 | orchestrator | 2025-03-11 01:06:00 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:00.863294 | orchestrator | 2025-03-11 01:06:00 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
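The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines that follow come from a client polling background tasks until each reaches a terminal state. A minimal sketch of that wait loop (the `get_state` callable is an assumption; the real OSISM client API is not shown in this log):

```python
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s  | %(message)s",
                    level=logging.INFO)

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll each task until every one reaches a terminal state.

    get_state maps a task id to a state string such as "STARTED" or
    "SUCCESS" -- a hypothetical stand-in for the real task API.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):  # iterate a copy; we mutate the set
            state = get_state(task_id)
            logging.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", int(interval))
            time.sleep(interval)
    return results
```

Tasks finish independently, which is why the set of task ids in the log shrinks one at a time as each reports SUCCESS.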
2025-03-11 01:06:00.865098 | orchestrator | 2025-03-11 01:06:00 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:03.914779 | orchestrator | 2025-03-11 01:06:03 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:03.918360 | orchestrator | 2025-03-11 01:06:03 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:03.920004 | orchestrator | 2025-03-11 01:06:03 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:03.920030 | orchestrator | 2025-03-11 01:06:03 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:03.920046 | orchestrator | 2025-03-11 01:06:03 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:03.920083 | orchestrator | 2025-03-11 01:06:03 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:06.963588 | orchestrator | 2025-03-11 01:06:03 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:06.963705 | orchestrator | 2025-03-11 01:06:06 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:06.964031 | orchestrator | 2025-03-11 01:06:06 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:06.964820 | orchestrator | 2025-03-11 01:06:06 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:06.965632 | orchestrator | 2025-03-11 01:06:06 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:06.966528 | orchestrator | 2025-03-11 01:06:06 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:06.967544 | orchestrator | 2025-03-11 01:06:06 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:10.054332 | orchestrator | 2025-03-11 01:06:06 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:10.054468 | orchestrator | 2025-03-11 01:06:10 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:10.057974 | orchestrator | 2025-03-11 01:06:10 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:10.058013 | orchestrator | 2025-03-11 01:06:10 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:10.061368 | orchestrator | 2025-03-11 01:06:10 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:10.064493 | orchestrator | 2025-03-11 01:06:10 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:10.064533 | orchestrator | 2025-03-11 01:06:10 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:13.141259 | orchestrator | 2025-03-11 01:06:10 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:13.141395 | orchestrator | 2025-03-11 01:06:13 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:13.141828 | orchestrator | 2025-03-11 01:06:13 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:13.141862 | orchestrator | 2025-03-11 01:06:13 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:13.143688 | orchestrator | 2025-03-11 01:06:13 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:13.150293 | orchestrator | 2025-03-11 01:06:13 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:13.153740 | orchestrator | 2025-03-11 01:06:13 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:16.242933 | orchestrator | 2025-03-11 01:06:13 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:16.243077 | orchestrator | 2025-03-11 01:06:16 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:16.243938 | orchestrator | 2025-03-11 01:06:16 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:16.245136 | orchestrator | 2025-03-11 01:06:16 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:16.246287 | orchestrator | 2025-03-11 01:06:16 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:16.249466 | orchestrator | 2025-03-11 01:06:16 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:16.250723 | orchestrator | 2025-03-11 01:06:16 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:19.342401 | orchestrator | 2025-03-11 01:06:16 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:19.342535 | orchestrator | 2025-03-11 01:06:19 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:19.353521 | orchestrator | 2025-03-11 01:06:19 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:19.364462 | orchestrator | 2025-03-11 01:06:19 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:19.369050 | orchestrator | 2025-03-11 01:06:19 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:19.374551 | orchestrator | 2025-03-11 01:06:19 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:19.390162 | orchestrator | 2025-03-11 01:06:19 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:19.392958 | orchestrator | 2025-03-11 01:06:19 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:22.503068 | orchestrator | 2025-03-11 01:06:22 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:22.509673 | orchestrator | 2025-03-11 01:06:22 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:22.511632 | orchestrator | 2025-03-11 01:06:22 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:22.512432 | orchestrator | 2025-03-11 01:06:22 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:22.516659 | orchestrator | 2025-03-11 01:06:22 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:22.522419 | orchestrator | 2025-03-11 01:06:22 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:25.598384 | orchestrator | 2025-03-11 01:06:22 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:25.598525 | orchestrator | 2025-03-11 01:06:25 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:25.600709 | orchestrator | 2025-03-11 01:06:25 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:25.603069 | orchestrator | 2025-03-11 01:06:25 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:25.605830 | orchestrator | 2025-03-11 01:06:25 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:25.612055 | orchestrator | 2025-03-11 01:06:25 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:25.614673 | orchestrator | 2025-03-11 01:06:25 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:28.674414 | orchestrator | 2025-03-11 01:06:25 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:28.674550 | orchestrator | 2025-03-11 01:06:28 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:28.675347 | orchestrator | 2025-03-11 01:06:28 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:28.677260 | orchestrator | 2025-03-11 01:06:28 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:28.681517 | orchestrator | 2025-03-11 01:06:28 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:28.685538 | orchestrator | 2025-03-11 01:06:28 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:28.688277 | orchestrator | 2025-03-11 01:06:28 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:31.745666 | orchestrator | 2025-03-11 01:06:28 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:31.745806 | orchestrator | 2025-03-11 01:06:31 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:31.746964 | orchestrator | 2025-03-11 01:06:31 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state STARTED
2025-03-11 01:06:31.749014 | orchestrator | 2025-03-11 01:06:31 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:31.751061 | orchestrator | 2025-03-11 01:06:31 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:31.754939 | orchestrator | 2025-03-11 01:06:31 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:31.758156 | orchestrator | 2025-03-11 01:06:31 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:34.854593 | orchestrator | 2025-03-11 01:06:31 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:34.854733 | orchestrator | 2025-03-11 01:06:34 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:34.856331 | orchestrator | 2025-03-11 01:06:34 | INFO  | Task c0c153fa-25a2-448b-a9c9-0208249ab952 is in state SUCCESS
2025-03-11 01:06:34.856366 | orchestrator | 2025-03-11 01:06:34 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:34.858817 | orchestrator | 2025-03-11 01:06:34 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:34.859228 | orchestrator | 2025-03-11 01:06:34 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:34.861169 | orchestrator | 2025-03-11 01:06:34 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:37.918868 | orchestrator | 2025-03-11 01:06:34 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:37.919114 | orchestrator | 2025-03-11 01:06:37 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:37.920187 | orchestrator | 2025-03-11 01:06:37 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:37.920226 | orchestrator | 2025-03-11 01:06:37 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:37.920403 | orchestrator | 2025-03-11 01:06:37 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:37.920436 | orchestrator | 2025-03-11 01:06:37 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:06:37.921304 | orchestrator | 2025-03-11 01:06:37 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state STARTED
2025-03-11 01:06:37.921427 | orchestrator | 2025-03-11 01:06:37 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:40.971104 | orchestrator | 2025-03-11 01:06:40 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED
2025-03-11 01:06:40.971367 | orchestrator | 2025-03-11 01:06:40 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:06:40.972016 | orchestrator | 2025-03-11 01:06:40 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:06:40.973132 | orchestrator | 2025-03-11 01:06:40 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:06:40.974257 | orchestrator | 2025-03-11 01:06:40 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:06:40.975154 |
orchestrator | 2025-03-11 01:06:40 | INFO  | Task 089123a6-c715-40a7-9511-7116bf3ce461 is in state SUCCESS
2025-03-11 01:06:40.976762 | orchestrator | 2025-03-11 01:06:40 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:06:40.976822 | orchestrator |
2025-03-11 01:06:40.976839 | orchestrator |
2025-03-11 01:06:40.976855 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-11 01:06:40.976870 | orchestrator |
2025-03-11 01:06:40.976885 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-11 01:06:40.976900 | orchestrator | Tuesday 11 March 2025 01:06:03 +0000 (0:00:00.676) 0:00:00.676 *********
2025-03-11 01:06:40.976943 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:06:40.976960 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:06:40.976975 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:06:40.976989 | orchestrator |
2025-03-11 01:06:40.977003 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-11 01:06:40.977025 | orchestrator | Tuesday 11 March 2025 01:06:04 +0000 (0:00:00.980) 0:00:01.657 *********
2025-03-11 01:06:40.977040 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-03-11 01:06:40.977054 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-03-11 01:06:40.977068 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-03-11 01:06:40.977082 | orchestrator |
2025-03-11 01:06:40.977096 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-03-11 01:06:40.977110 | orchestrator |
2025-03-11 01:06:40.977124 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-03-11 01:06:40.977138 | orchestrator | Tuesday 11 March 2025 01:06:05 +0000 (0:00:00.663) 0:00:02.320 *********
2025-03-11 01:06:40.977153 | orchestrator | included:
/ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:06:40.977188 | orchestrator |
2025-03-11 01:06:40.977203 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-03-11 01:06:40.977217 | orchestrator | Tuesday 11 March 2025 01:06:06 +0000 (0:00:01.724) 0:00:04.045 *********
2025-03-11 01:06:40.977231 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-03-11 01:06:40.977245 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-03-11 01:06:40.977259 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-03-11 01:06:40.977273 | orchestrator |
2025-03-11 01:06:40.977287 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-03-11 01:06:40.977301 | orchestrator | Tuesday 11 March 2025 01:06:08 +0000 (0:00:01.974) 0:00:06.019 *********
2025-03-11 01:06:40.977315 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-03-11 01:06:40.977329 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-03-11 01:06:40.977343 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-03-11 01:06:40.977359 | orchestrator |
2025-03-11 01:06:40.977374 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-03-11 01:06:40.977390 | orchestrator | Tuesday 11 March 2025 01:06:13 +0000 (0:00:04.657) 0:00:10.677 *********
2025-03-11 01:06:40.977406 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:06:40.977427 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:06:40.977443 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:06:40.977458 | orchestrator |
2025-03-11 01:06:40.977473 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-03-11 01:06:40.977489 | orchestrator | Tuesday 11 March 2025 01:06:20 +0000 (0:00:07.272) 0:00:17.949 *********
2025-03-11 01:06:40.977504 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:06:40.977520 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:06:40.977536 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:06:40.977551 | orchestrator |
2025-03-11 01:06:40.977567 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:06:40.977582 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:06:40.977600 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:06:40.977616 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:06:40.977632 | orchestrator |
2025-03-11 01:06:40.977648 | orchestrator |
2025-03-11 01:06:40.977664 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:06:40.977680 | orchestrator | Tuesday 11 March 2025 01:06:31 +0000 (0:00:10.653) 0:00:28.603 *********
2025-03-11 01:06:40.977696 | orchestrator | ===============================================================================
2025-03-11 01:06:40.977712 | orchestrator | memcached : Restart memcached container -------------------------------- 10.65s
2025-03-11 01:06:40.977725 | orchestrator | memcached : Check memcached container ----------------------------------- 7.27s
2025-03-11 01:06:40.977739 | orchestrator | memcached : Copying over config.json files for services ----------------- 4.66s
2025-03-11 01:06:40.977753 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.97s
2025-03-11 01:06:40.977767 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.72s
2025-03-11 01:06:40.977781 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.98s
2025-03-11 01:06:40.977795 |
orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2025-03-11 01:06:40.977809 | orchestrator |
2025-03-11 01:06:40.977822 | orchestrator |
2025-03-11 01:06:40.977836 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-11 01:06:40.977850 | orchestrator |
2025-03-11 01:06:40.977864 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-11 01:06:40.977878 | orchestrator | Tuesday 11 March 2025 01:06:03 +0000 (0:00:00.763) 0:00:00.766 *********
2025-03-11 01:06:40.977898 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:06:40.977931 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:06:40.977945 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:06:40.977959 | orchestrator |
2025-03-11 01:06:40.977974 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-11 01:06:40.977997 | orchestrator | Tuesday 11 March 2025 01:06:04 +0000 (0:00:00.744) 0:00:01.511 *********
2025-03-11 01:06:40.978012 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-03-11 01:06:40.978089 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-03-11 01:06:40.978110 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-03-11 01:06:40.978124 | orchestrator |
2025-03-11 01:06:40.978138 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-03-11 01:06:40.978152 | orchestrator |
2025-03-11 01:06:40.978166 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-03-11 01:06:40.978180 | orchestrator | Tuesday 11 March 2025 01:06:04 +0000 (0:00:00.509) 0:00:02.020 *********
2025-03-11 01:06:40.978194 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:06:40.978207 | orchestrator |
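The redis container definitions that follow each carry a Kolla healthcheck of the form `healthcheck_listen redis-server 6379`, i.e. a test that something is accepting connections on the service port. A rough stand-in for that check (assumption: the real Kolla `healthcheck_listen` script also verifies which process owns the socket, which this sketch skips):

```python
import socket

def healthcheck_listen(host, port, timeout=30.0):
    """Return True if a TCP listener accepts connections on host:port.

    Simplified illustration of the container healthcheck test seen in
    the log; not the actual Kolla healthcheck implementation.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The `interval`, `retries`, and `start_period` keys in the container definitions control how often the container runtime runs such a test and how many failures mark the container unhealthy.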
2025-03-11 01:06:40.978222 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-03-11 01:06:40.978240 | orchestrator | Tuesday 11 March 2025 01:06:05 +0000 (0:00:01.234) 0:00:03.255 *********
2025-03-11 01:06:40.978257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978369 | orchestrator |
2025-03-11 01:06:40.978383 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-03-11 01:06:40.978397 | orchestrator | Tuesday 11 March 2025 01:06:08 +0000 (0:00:02.173) 0:00:05.429 *********
2025-03-11 01:06:40.978412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978514 | orchestrator |
2025-03-11 01:06:40.978528 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-03-11 01:06:40.978542 | orchestrator | Tuesday 11 March 2025 01:06:13 +0000 (0:00:05.715) 0:00:11.145 *********
2025-03-11 01:06:40.978556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-03-11 01:06:40.978586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis',
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:06:40.978600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:06:40.978621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:06:40.978635 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:06:40.978650 | orchestrator | 2025-03-11 01:06:40.978669 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-03-11 01:06:40.978684 | orchestrator | Tuesday 11 March 2025 01:06:21 +0000 (0:00:08.148) 0:00:19.293 ********* 2025-03-11 01:06:40.978698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:06:40.978713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:06:40.978728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-11 01:06:40.978743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:06:40.978764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:06:40.978784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-11 01:06:40.978798 | orchestrator | 2025-03-11 01:06:40.978812 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-11 01:06:40.978826 | orchestrator | Tuesday 11 March 2025 01:06:26 +0000 (0:00:04.226) 0:00:23.520 ********* 2025-03-11 01:06:40.978840 | orchestrator | 2025-03-11 01:06:40.978854 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-11 01:06:40.978874 | orchestrator | Tuesday 11 March 2025 01:06:26 +0000 (0:00:00.150) 0:00:23.670 ********* 2025-03-11 01:06:44.036896 | orchestrator | 2025-03-11 01:06:44.037070 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-11 01:06:44.037091 | orchestrator | Tuesday 11 March 2025 01:06:26 +0000 (0:00:00.162) 0:00:23.833 ********* 2025-03-11 01:06:44.037106 | orchestrator | 2025-03-11 01:06:44.037121 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-03-11 01:06:44.037135 | orchestrator | Tuesday 11 March 2025 01:06:26 +0000 (0:00:00.273) 0:00:24.107 ********* 
2025-03-11 01:06:44.037150 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:06:44.037165 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:06:44.037180 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:06:44.037194 | orchestrator | 2025-03-11 01:06:44.037209 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-03-11 01:06:44.037223 | orchestrator | Tuesday 11 March 2025 01:06:31 +0000 (0:00:04.963) 0:00:29.071 ********* 2025-03-11 01:06:44.037237 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:06:44.037251 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:06:44.037266 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:06:44.037299 | orchestrator | 2025-03-11 01:06:44.037313 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:06:44.037328 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:06:44.037344 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:06:44.037358 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:06:44.037372 | orchestrator | 2025-03-11 01:06:44.037386 | orchestrator | 2025-03-11 01:06:44.037400 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-11 01:06:44.037415 | orchestrator | Tuesday 11 March 2025 01:06:39 +0000 (0:00:07.312) 0:00:36.383 ********* 2025-03-11 01:06:44.037451 | orchestrator | =============================================================================== 2025-03-11 01:06:44.037469 | orchestrator | redis : Copying over redis config files --------------------------------- 8.15s 2025-03-11 01:06:44.037486 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.31s 2025-03-11 01:06:44.037506 | 
orchestrator | redis : Copying over default config.json files -------------------------- 5.72s 2025-03-11 01:06:44.037522 | orchestrator | redis : Restart redis container ----------------------------------------- 4.96s 2025-03-11 01:06:44.037539 | orchestrator | redis : Check redis containers ------------------------------------------ 4.23s 2025-03-11 01:06:44.037556 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.17s 2025-03-11 01:06:44.037589 | orchestrator | redis : include_tasks --------------------------------------------------- 1.23s 2025-03-11 01:06:44.037616 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s 2025-03-11 01:06:44.037633 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.59s 2025-03-11 01:06:44.037650 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2025-03-11 01:06:44.037690 | orchestrator | 2025-03-11 01:06:44 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED 2025-03-11 01:06:44.041684 | orchestrator | 2025-03-11 01:06:44 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED 2025-03-11 01:06:44.041739 | orchestrator | 2025-03-11 01:06:44 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:06:47.093758 | orchestrator | 2025-03-11 01:06:44 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:06:47.093877 | orchestrator | 2025-03-11 01:06:44 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED 2025-03-11 01:06:47.093896 | orchestrator | 2025-03-11 01:06:44 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:06:47.093961 | orchestrator | 2025-03-11 01:06:47 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state STARTED 2025-03-11 01:06:47.094098 | orchestrator | 2025-03-11 01:06:47 | INFO  | Task 
b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED 2025-03-11 01:07:48.524649 | orchestrator | 2025-03-11 01:07:48 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:07:48.528558 | orchestrator | 2025-03-11 01:07:48 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:07:48.530858 | orchestrator | 2025-03-11 01:07:48 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED 2025-03-11 01:07:48.531085 | orchestrator | 2025-03-11 01:07:48 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:07:51.573929 | orchestrator | 2025-03-11 01:07:51 | INFO  | Task f0e7df1b-99d4-4fc3-b695-7250df5a5358 is in state SUCCESS 2025-03-11 01:07:51.575550 | orchestrator | 2025-03-11 01:07:51.575604 | orchestrator | 2025-03-11 01:07:51.575620 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-11 01:07:51.575635 | orchestrator | 2025-03-11 01:07:51.575649 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-11 01:07:51.575663 | orchestrator | Tuesday 11 March 2025 01:06:03 +0000 (0:00:00.569) 0:00:00.569 ********* 2025-03-11 01:07:51.575721 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:07:51.575736 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:07:51.575751 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:07:51.575765 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:07:51.575779 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:07:51.575793 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:07:51.575807 | orchestrator | 2025-03-11 01:07:51.575821 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-11 01:07:51.575835 | orchestrator | Tuesday 11 March 2025 01:06:04 +0000 (0:00:01.643) 0:00:02.213 ********* 2025-03-11 01:07:51.575849 | orchestrator | ok: [testbed-node-3] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:07:51.575864 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:07:51.575878 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:07:51.575892 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:07:51.575906 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:07:51.575919 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-11 01:07:51.575933 | orchestrator | 2025-03-11 01:07:51.575947 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-03-11 01:07:51.575961 | orchestrator | 2025-03-11 01:07:51.576008 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-03-11 01:07:51.576024 | orchestrator | Tuesday 11 March 2025 01:06:06 +0000 (0:00:01.301) 0:00:03.515 ********* 2025-03-11 01:07:51.576039 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:07:51.576055 | orchestrator | 2025-03-11 01:07:51.576069 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-03-11 01:07:51.576083 | orchestrator | Tuesday 11 March 2025 01:06:09 +0000 (0:00:03.521) 0:00:07.036 ********* 2025-03-11 01:07:51.576097 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-03-11 01:07:51.576111 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-03-11 01:07:51.576126 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-03-11 01:07:51.576142 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-03-11 01:07:51.576158 | 
orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-03-11 01:07:51.576173 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-03-11 01:07:51.576189 | orchestrator | 2025-03-11 01:07:51.576205 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-03-11 01:07:51.576220 | orchestrator | Tuesday 11 March 2025 01:06:12 +0000 (0:00:03.326) 0:00:10.362 ********* 2025-03-11 01:07:51.576236 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-03-11 01:07:51.576252 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-03-11 01:07:51.576268 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-03-11 01:07:51.576284 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-03-11 01:07:51.576300 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-03-11 01:07:51.576315 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-03-11 01:07:51.576331 | orchestrator | 2025-03-11 01:07:51.576347 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-03-11 01:07:51.576362 | orchestrator | Tuesday 11 March 2025 01:06:19 +0000 (0:00:06.314) 0:00:16.677 ********* 2025-03-11 01:07:51.576395 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-03-11 01:07:51.576411 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:07:51.576428 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-03-11 01:07:51.576444 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-03-11 01:07:51.576459 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:07:51.576475 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-03-11 01:07:51.576491 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:07:51.576505 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-03-11 01:07:51.576519 | 
orchestrator | skipping: [testbed-node-0] 2025-03-11 01:07:51.576533 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:07:51.576547 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-03-11 01:07:51.576561 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:07:51.576575 | orchestrator | 2025-03-11 01:07:51.576588 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-03-11 01:07:51.576602 | orchestrator | Tuesday 11 March 2025 01:06:24 +0000 (0:00:05.533) 0:00:22.211 ********* 2025-03-11 01:07:51.576617 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:07:51.576631 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:07:51.576645 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:07:51.576659 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:07:51.576673 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:07:51.576686 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:07:51.576700 | orchestrator | 2025-03-11 01:07:51.576714 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-03-11 01:07:51.576728 | orchestrator | Tuesday 11 March 2025 01:06:25 +0000 (0:00:00.781) 0:00:22.992 ********* 2025-03-11 01:07:51.576759 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.576778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.576793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.576816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.576831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.576847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.576869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.576885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.576899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.576968 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577047 | orchestrator | 2025-03-11 01:07:51.577062 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-03-11 01:07:51.577112 | orchestrator | Tuesday 11 March 2025 01:06:28 +0000 (0:00:02.702) 0:00:25.695 ********* 2025-03-11 01:07:51.577128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577165 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577195 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577220 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577339 | orchestrator | 2025-03-11 01:07:51.577354 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-03-11 01:07:51.577368 | orchestrator | Tuesday 11 March 2025 01:06:34 +0000 (0:00:06.119) 0:00:31.814 ********* 2025-03-11 01:07:51.577383 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:07:51.577396 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:07:51.577411 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:07:51.577424 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:07:51.577438 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:07:51.577452 | orchestrator | changed: [testbed-node-2] 2025-03-11 
01:07:51.577466 | orchestrator | 2025-03-11 01:07:51.577480 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-03-11 01:07:51.577520 | orchestrator | Tuesday 11 March 2025 01:06:38 +0000 (0:00:03.689) 0:00:35.503 ********* 2025-03-11 01:07:51.577536 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:07:51.577549 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:07:51.577563 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:07:51.577577 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:07:51.577591 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:07:51.577605 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:07:51.577619 | orchestrator | 2025-03-11 01:07:51.577633 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-03-11 01:07:51.577648 | orchestrator | Tuesday 11 March 2025 01:06:41 +0000 (0:00:03.343) 0:00:38.847 ********* 2025-03-11 01:07:51.577661 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:07:51.577675 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:07:51.577689 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:07:51.577703 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:07:51.577717 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:07:51.577731 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:07:51.577744 | orchestrator | 2025-03-11 01:07:51.577770 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-03-11 01:07:51.577785 | orchestrator | Tuesday 11 March 2025 01:06:43 +0000 (0:00:01.874) 0:00:40.722 ********* 2025-03-11 01:07:51.577799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577851 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-11 01:07:51.577889 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-03-11 01:07:51.577904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-03-11 01:07:51.577919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-03-11 01:07:51.577934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-03-11 01:07:51.577955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-03-11 01:07:51.578075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-03-11 01:07:51.578097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-03-11 01:07:51.578111 | orchestrator |
2025-03-11 01:07:51.578126 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:07:51.578141 | orchestrator | Tuesday 11 March 2025 01:06:47 +0000 (0:00:03.874) 0:00:44.597 *********
2025-03-11 01:07:51.578155 | orchestrator |
2025-03-11 01:07:51.578169 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:07:51.578183 | orchestrator | Tuesday 11 March 2025 01:06:47 +0000 (0:00:00.150) 0:00:44.747 *********
2025-03-11 01:07:51.578197 | orchestrator |
2025-03-11 01:07:51.578212 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:07:51.578226 | orchestrator | Tuesday 11 March 2025 01:06:47 +0000 (0:00:00.351) 0:00:45.099 *********
2025-03-11 01:07:51.578240 | orchestrator |
2025-03-11 01:07:51.578255 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:07:51.578269 | orchestrator | Tuesday 11 March 2025 01:06:47 +0000 (0:00:00.171) 0:00:45.271 *********
2025-03-11 01:07:51.578283 | orchestrator |
2025-03-11 01:07:51.578296 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:07:51.578310 | orchestrator | Tuesday 11 March 2025 01:06:48 +0000 (0:00:00.426) 0:00:45.697 *********
2025-03-11 01:07:51.578324 | orchestrator |
2025-03-11 01:07:51.578338 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-03-11 01:07:51.578352 | orchestrator | Tuesday 11 March 2025 01:06:48 +0000 (0:00:00.190) 0:00:45.887 *********
2025-03-11 01:07:51.578366 | orchestrator |
2025-03-11 01:07:51.578380 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-03-11 01:07:51.578394 | orchestrator | Tuesday 11 March 2025 01:06:49 +0000 (0:00:00.785) 0:00:46.673 *********
2025-03-11 01:07:51.578452 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:07:51.578468 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:07:51.578482 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:07:51.578496 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:07:51.578510 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:07:51.578524 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:07:51.578550 | orchestrator |
2025-03-11 01:07:51.578563 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-03-11 01:07:51.578576 | orchestrator | Tuesday 11 March 2025 01:06:57 +0000 (0:00:08.195) 0:00:54.868 *********
2025-03-11 01:07:51.578588 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:07:51.578601 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:07:51.578613 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:07:51.578625 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:07:51.578638 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:07:51.578650 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:07:51.578662 | orchestrator |
2025-03-11 01:07:51.578675 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-03-11 01:07:51.578688 | orchestrator | Tuesday 11 March 2025 01:07:01 +0000 (0:00:03.699) 0:00:58.568 *********
2025-03-11 01:07:51.578700 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:07:51.578713 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:07:51.578725 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:07:51.578738 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:07:51.578760 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:07:51.578774 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:07:51.578787 | orchestrator |
2025-03-11 01:07:51.578808 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-03-11 01:07:51.578821 | orchestrator | Tuesday 11 March 2025 01:07:13 +0000 (0:00:12.634) 0:01:11.202 *********
2025-03-11 01:07:51.578834 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-03-11 01:07:51.578847 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-03-11 01:07:51.578860 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-03-11 01:07:51.578873 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-03-11 01:07:51.578885 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-03-11 01:07:51.578898 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-03-11 01:07:51.578910 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-03-11 01:07:51.578923 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-03-11 01:07:51.578936 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-03-11 01:07:51.578948 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-03-11 01:07:51.578961 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-03-11 01:07:51.578988 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-03-11 01:07:51.579002 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:07:51.579015 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:07:51.579027 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:07:51.579040 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:07:51.579052 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:07:51.579071 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-03-11 01:07:51.579084 | orchestrator |
2025-03-11 01:07:51.579097 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-03-11 01:07:51.579109 | orchestrator | Tuesday 11 March 2025 01:07:25 +0000 (0:00:12.093) 0:01:23.296 *********
2025-03-11 01:07:51.579122 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-03-11 01:07:51.579135 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:07:51.579148 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-03-11 01:07:51.579161 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:07:51.579173 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-03-11 01:07:51.579186 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:07:51.579198 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-03-11 01:07:51.579211 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-03-11 01:07:51.579223 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-03-11 01:07:51.579236 | orchestrator |
2025-03-11 01:07:51.579248 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-03-11 01:07:51.579261 | orchestrator | Tuesday 11 March 2025 01:07:30 +0000 (0:00:04.505) 0:01:27.802 *********
2025-03-11 01:07:51.579274 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:07:51.579286 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:07:51.579299 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:07:51.579311 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:07:51.579324 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:07:51.579336 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:07:51.579349 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:07:51.579361 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:07:51.579374 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-03-11 01:07:51.579387 | orchestrator |
2025-03-11 01:07:51.579399 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-03-11 01:07:51.579412 | orchestrator | Tuesday 11 March 2025 01:07:35 +0000 (0:00:05.667) 0:01:33.469 *********
2025-03-11 01:07:51.579424 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:07:51.579437 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:07:51.579449 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:07:51.579462 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:07:51.579475 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:07:51.579487 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:07:51.579500 | orchestrator |
2025-03-11 01:07:51.579512 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:07:51.579530 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 01:07:51.579622 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 01:07:51.579638 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-11 01:07:51.579651 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 01:07:51.579663 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 01:07:51.579681 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-11 01:07:51.579694 | orchestrator |
2025-03-11 01:07:51.579719 | orchestrator |
2025-03-11 01:07:51.579732 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:07:51.579745 | orchestrator | Tuesday 11 March 2025 01:07:48 +0000 (0:00:12.567) 0:01:46.037 *********
2025-03-11 01:07:51.579757 | orchestrator | ===============================================================================
2025-03-11 01:07:51.579774 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 25.20s
2025-03-11 01:07:51.579787 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 12.09s
2025-03-11 01:07:51.579799 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.20s
2025-03-11 01:07:51.579811 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 6.32s
2025-03-11 01:07:51.579824 | orchestrator | openvswitch : Copying over config.json files for services --------------- 6.12s
2025-03-11 01:07:51.579874 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.67s
2025-03-11 01:07:51.579887 | orchestrator | module-load : Drop module persistence ----------------------------------- 5.53s
2025-03-11 01:07:51.579899 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 4.51s
2025-03-11 01:07:51.579912 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.87s
2025-03-11 01:07:51.579924 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 3.70s
2025-03-11 01:07:51.579937 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.69s
2025-03-11 01:07:51.579949 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.52s
2025-03-11 01:07:51.579962 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 3.34s
2025-03-11 01:07:51.580031 | orchestrator | module-load : Load modules ---------------------------------------------- 3.33s
2025-03-11 01:07:51.580047 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.70s
2025-03-11 01:07:51.580060 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.08s
2025-03-11 01:07:51.580072 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.87s
2025-03-11 01:07:51.580085 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.64s
2025-03-11 01:07:51.580097 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.30s
2025-03-11 01:07:51.580109 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.78s
2025-03-11 01:07:51.580122 | orchestrator | 2025-03-11 01:07:51 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:07:51.580139 | orchestrator | 2025-03-11 01:07:51 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:07:51.580153 | orchestrator | 2025-03-11 01:07:51 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:07:51.580167 | orchestrator | 2025-03-11 01:07:51 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:07:51.580221 | orchestrator | 2025-03-11 01:07:51 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:07:51.582799 | orchestrator | 2025-03-11 01:07:51 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:07:54.667504 | orchestrator | 2025-03-11 01:07:54 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:07:54.669321 | orchestrator | 2025-03-11 01:07:54 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:07:54.676102 | orchestrator | 2025-03-11 01:07:54 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:07:54.677222 | orchestrator | 2025-03-11 01:07:54 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:07:54.678415 | orchestrator | 2025-03-11 01:07:54 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:07:57.717797 | orchestrator | 2025-03-11 01:07:54 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:07:57.718071 | orchestrator | 2025-03-11 01:07:57 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:07:57.718176 | orchestrator | 2025-03-11 01:07:57 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:07:57.718560 | orchestrator | 2025-03-11 01:07:57 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:07:57.719317 | orchestrator | 2025-03-11 01:07:57 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:07:57.720043 | orchestrator | 2025-03-11 01:07:57 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:00.764680 | orchestrator | 2025-03-11 01:07:57 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:00.764818 | orchestrator | 2025-03-11 01:08:00 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:00.764948 | orchestrator | 2025-03-11 01:08:00 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:00.767305 | orchestrator | 2025-03-11 01:08:00 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:00.769000 | orchestrator | 2025-03-11 01:08:00 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:00.769949 | orchestrator | 2025-03-11 01:08:00 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:00.770277 | orchestrator | 2025-03-11 01:08:00 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:03.817226 | orchestrator | 2025-03-11 01:08:03 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:03.818710 | orchestrator | 2025-03-11 01:08:03 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:03.819599 | orchestrator | 2025-03-11 01:08:03 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:03.820525 | orchestrator | 2025-03-11 01:08:03 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:03.822609 | orchestrator | 2025-03-11 01:08:03 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:06.863018 | orchestrator | 2025-03-11 01:08:03 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:06.863156 | orchestrator | 2025-03-11 01:08:06 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:06.865561 | orchestrator | 2025-03-11 01:08:06 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:06.866433 | orchestrator | 2025-03-11 01:08:06 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:06.866471 | orchestrator | 2025-03-11 01:08:06 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:06.867348 | orchestrator | 2025-03-11 01:08:06 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:09.919094 | orchestrator | 2025-03-11 01:08:06 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:09.919226 | orchestrator | 2025-03-11 01:08:09 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:09.919511 | orchestrator | 2025-03-11 01:08:09 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:09.919545 | orchestrator | 2025-03-11 01:08:09 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:09.920288 | orchestrator | 2025-03-11 01:08:09 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:09.921026 | orchestrator | 2025-03-11 01:08:09 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:12.993965 | orchestrator | 2025-03-11 01:08:09 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:12.994134 | orchestrator | 2025-03-11 01:08:12 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:12.996658 | orchestrator | 2025-03-11 01:08:12 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:13.002000 | orchestrator | 2025-03-11 01:08:12 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:13.006145 | orchestrator | 2025-03-11 01:08:13 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:13.020565 | orchestrator | 2025-03-11 01:08:13 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:16.060717 | orchestrator | 2025-03-11 01:08:13 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:16.060850 | orchestrator | 2025-03-11 01:08:16 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:16.062095 | orchestrator | 2025-03-11 01:08:16 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:16.062819 | orchestrator | 2025-03-11 01:08:16 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:16.063741 | orchestrator | 2025-03-11 01:08:16 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:16.064970 | orchestrator | 2025-03-11 01:08:16 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:19.131580 | orchestrator | 2025-03-11 01:08:16 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:19.131724 | orchestrator | 2025-03-11 01:08:19 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:19.132356 | orchestrator | 2025-03-11 01:08:19 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:19.132978 | orchestrator | 2025-03-11 01:08:19 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:19.135051 | orchestrator | 2025-03-11 01:08:19 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:19.135563 | orchestrator | 2025-03-11 01:08:19 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:19.137508 | orchestrator | 2025-03-11 01:08:19 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:22.181969 | orchestrator | 2025-03-11 01:08:22 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:22.182219 | orchestrator | 2025-03-11 01:08:22 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:22.182262 | orchestrator | 2025-03-11 01:08:22 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:22.184808 | orchestrator | 2025-03-11 01:08:22 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:22.185491 | orchestrator | 2025-03-11 01:08:22 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:22.185624 | orchestrator | 2025-03-11 01:08:22 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:25.239086 | orchestrator | 2025-03-11 01:08:25 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:25.239324 | orchestrator | 2025-03-11 01:08:25 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:25.244299 | orchestrator | 2025-03-11 01:08:25 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:25.245119 | orchestrator | 2025-03-11 01:08:25 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:25.246195 | orchestrator | 2025-03-11 01:08:25 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:28.300597 | orchestrator | 2025-03-11 01:08:25 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:28.300725 | orchestrator | 2025-03-11 01:08:28 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:28.302819 | orchestrator | 2025-03-11 01:08:28 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:28.302854 | orchestrator | 2025-03-11 01:08:28 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:28.305809 | orchestrator | 2025-03-11 01:08:28 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:28.307720 | orchestrator | 2025-03-11 01:08:28 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:31.363038 | orchestrator | 2025-03-11 01:08:28 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:31.363165 | orchestrator | 2025-03-11 01:08:31 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:31.367102 | orchestrator | 2025-03-11 01:08:31 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:31.371052 | orchestrator | 2025-03-11 01:08:31 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:31.372795 | orchestrator | 2025-03-11 01:08:31 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:31.376774 | orchestrator | 2025-03-11 01:08:31 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:34.418791 | orchestrator | 2025-03-11 01:08:31 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:34.418966 | orchestrator | 2025-03-11 01:08:34 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:34.420396 | orchestrator | 2025-03-11 01:08:34 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:34.420431 | orchestrator | 2025-03-11 01:08:34 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:34.423786 | orchestrator | 2025-03-11 01:08:34 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:34.425974 | orchestrator | 2025-03-11 01:08:34 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:37.497358 | orchestrator | 2025-03-11 01:08:34 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:37.497479 | orchestrator | 2025-03-11 01:08:37 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:37.499289 | orchestrator | 2025-03-11 01:08:37 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:37.501031 | orchestrator | 2025-03-11 01:08:37 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:37.502189 | orchestrator | 2025-03-11 01:08:37 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:37.504302 | orchestrator | 2025-03-11 01:08:37 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:40.557216 | orchestrator | 2025-03-11 01:08:37 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:40.557425 | orchestrator | 2025-03-11 01:08:40 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:40.558977 | orchestrator | 2025-03-11 01:08:40 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:40.562301 | orchestrator | 2025-03-11 01:08:40 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:40.568124 | orchestrator | 2025-03-11 01:08:40 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:40.569091 | orchestrator | 2025-03-11 01:08:40 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:43.656270 | orchestrator | 2025-03-11 01:08:40 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:43.656415 | orchestrator | 2025-03-11 01:08:43 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:43.667149 | orchestrator | 2025-03-11 01:08:43 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:43.673781 | orchestrator | 2025-03-11 01:08:43 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:43.675319 | orchestrator | 2025-03-11 01:08:43 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:43.678586 | orchestrator | 2025-03-11 01:08:43 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:43.679374 | orchestrator | 2025-03-11 01:08:43 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:46.767321 | orchestrator | 2025-03-11 01:08:46 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:46.773470 | orchestrator | 2025-03-11 01:08:46 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:46.776032 | orchestrator | 2025-03-11 01:08:46 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:46.778568 | orchestrator | 2025-03-11 01:08:46 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:46.783292 | orchestrator | 2025-03-11 01:08:46 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:49.845947 | orchestrator | 2025-03-11 01:08:46 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:49.846142 | orchestrator | 2025-03-11 01:08:49 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:49.846485 | orchestrator | 2025-03-11 01:08:49 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:49.846519 | orchestrator | 2025-03-11 01:08:49 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:49.847379 | orchestrator | 2025-03-11 01:08:49 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:49.847998 | orchestrator | 2025-03-11 01:08:49 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:49.848204 | orchestrator | 2025-03-11 01:08:49 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:52.885968 | orchestrator | 2025-03-11 01:08:52 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:52.886593 | orchestrator | 2025-03-11 01:08:52 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:52.888430 | orchestrator | 2025-03-11 01:08:52 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:52.889022 | orchestrator | 2025-03-11 01:08:52 | INFO  | Task 645d0e1a-47fe-48f1-871f-e7bd6a45537f is in state STARTED
2025-03-11 01:08:52.891166 | orchestrator | 2025-03-11 01:08:52 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:52.891939 | orchestrator | 2025-03-11 01:08:52 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:55.968302 | orchestrator | 2025-03-11 01:08:52 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:55.968436 | orchestrator | 2025-03-11 01:08:55 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:55.972487 | orchestrator | 2025-03-11 01:08:55 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:55.974520 | orchestrator | 2025-03-11 01:08:55 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:55.974557 | orchestrator | 2025-03-11 01:08:55 | INFO  | Task 645d0e1a-47fe-48f1-871f-e7bd6a45537f is in state STARTED
2025-03-11 01:08:55.975310 | orchestrator | 2025-03-11 01:08:55 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:55.978316 | orchestrator | 2025-03-11 01:08:55 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:08:59.060497 | orchestrator | 2025-03-11 01:08:55 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:08:59.060635 | orchestrator | 2025-03-11 01:08:59 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state STARTED
2025-03-11 01:08:59.063261 | orchestrator | 2025-03-11 01:08:59 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:08:59.067031 | orchestrator | 2025-03-11 01:08:59 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:08:59.073289 | orchestrator | 2025-03-11 01:08:59 | INFO  | Task 645d0e1a-47fe-48f1-871f-e7bd6a45537f is in state STARTED
2025-03-11 01:08:59.078567 | orchestrator | 2025-03-11 01:08:59 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:08:59.079465 | orchestrator | 2025-03-11 01:08:59 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:09:02.124585 | orchestrator | 2025-03-11 01:08:59 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:02.124712 | orchestrator | 2025-03-11 01:09:02 | INFO  | Task cbba2294-f2ed-4873-b840-9da91a758c7a is in state STARTED
2025-03-11 01:09:02.130527 | orchestrator |
2025-03-11 01:09:02.130606 | orchestrator |
2025-03-11 01:09:02.130622 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-03-11 01:09:02.130636 | orchestrator |
2025-03-11 01:09:02.130649 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-03-11 01:09:02.130663 | orchestrator | Tuesday 11 March 2025 01:04:01 +0000 (0:00:00.269) 0:00:00.269 *********
2025-03-11 01:09:02.130676 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:09:02.130690 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:09:02.130704 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:09:02.130716 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:09:02.130729 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:09:02.130741 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:09:02.130754 | orchestrator |
2025-03-11 01:09:02.130767 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-03-11 01:09:02.130780 | orchestrator | Tuesday 11 March 2025 01:04:02 +0000 (0:00:01.507) 0:00:01.777 *********
2025-03-11 01:09:02.130792 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:09:02.130806 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:09:02.130819 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:09:02.130831 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.130844 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:09:02.130933 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:09:02.130982 | orchestrator |
2025-03-11 01:09:02.130996 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-03-11 01:09:02.131009 | orchestrator | Tuesday 11 March 2025 01:04:07 +0000 (0:00:04.131) 0:00:05.908 *********
2025-03-11 01:09:02.131021 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:09:02.131034 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:09:02.131046 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:09:02.131059 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.131071 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:09:02.131084 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:09:02.131097 | orchestrator |
2025-03-11 01:09:02.131112 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-03-11 01:09:02.131127 | orchestrator | Tuesday 11 March 2025 01:04:13 +0000 (0:00:05.951) 0:00:11.860 *********
2025-03-11 01:09:02.131142 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:09:02.131157 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:09:02.131172 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:09:02.131186 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:09:02.131201 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:09:02.131215 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:09:02.131229 | orchestrator |
2025-03-11 01:09:02.131243 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-03-11 01:09:02.131257 | orchestrator | Tuesday 11 March 2025 01:04:16 +0000 (0:00:03.085) 0:00:14.945 *********
2025-03-11 01:09:02.131272 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:09:02.131285 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:09:02.131299 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:09:02.131314 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:09:02.131327 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:09:02.131342 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:09:02.131356 | orchestrator |
2025-03-11 01:09:02.131370 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-03-11 01:09:02.131390 | orchestrator | Tuesday 11 March 2025 01:04:21 +0000 (0:00:02.292) 0:00:20.014 *********
2025-03-11 01:09:02.131405 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:09:02.131419 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:09:02.131433 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:09:02.131448 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:09:02.131462 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:09:02.131474 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:09:02.131491 | orchestrator |
2025-03-11 01:09:02.131504 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-03-11 01:09:02.131517 | orchestrator | Tuesday 11 March 2025 01:04:23 +0000 (0:00:02.292) 0:00:22.307 *********
2025-03-11 01:09:02.131529 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:09:02.131541 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:09:02.131553 |
orchestrator | skipping: [testbed-node-5] 2025-03-11 01:09:02.131566 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.131578 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.131590 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.131603 | orchestrator | 2025-03-11 01:09:02.131615 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-03-11 01:09:02.131629 | orchestrator | Tuesday 11 March 2025 01:04:25 +0000 (0:00:01.954) 0:00:24.261 ********* 2025-03-11 01:09:02.131641 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:09:02.131654 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:09:02.131666 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:09:02.131678 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.131691 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.131703 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.131715 | orchestrator | 2025-03-11 01:09:02.131728 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-03-11 01:09:02.131740 | orchestrator | Tuesday 11 March 2025 01:04:27 +0000 (0:00:02.306) 0:00:26.568 ********* 2025-03-11 01:09:02.131759 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-11 01:09:02.131772 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-11 01:09:02.131784 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:09:02.131797 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-11 01:09:02.131809 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-11 01:09:02.131821 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:09:02.131834 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-11 
01:09:02.131846 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-11 01:09:02.131858 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:09:02.131871 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-11 01:09:02.131910 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-11 01:09:02.131924 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.131937 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-11 01:09:02.131949 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-11 01:09:02.131961 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.131974 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-03-11 01:09:02.131986 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-03-11 01:09:02.131999 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.132011 | orchestrator | 2025-03-11 01:09:02.132024 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-03-11 01:09:02.132036 | orchestrator | Tuesday 11 March 2025 01:04:28 +0000 (0:00:01.076) 0:00:27.644 ********* 2025-03-11 01:09:02.132048 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:09:02.132061 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:09:02.132073 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:09:02.132086 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.132133 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.132147 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.132159 | orchestrator | 2025-03-11 01:09:02.132172 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S 
binaries] *** 2025-03-11 01:09:02.132198 | orchestrator | Tuesday 11 March 2025 01:04:31 +0000 (0:00:02.757) 0:00:30.402 ********* 2025-03-11 01:09:02.132210 | orchestrator | ok: [testbed-node-3] 2025-03-11 01:09:02.132223 | orchestrator | ok: [testbed-node-4] 2025-03-11 01:09:02.132235 | orchestrator | ok: [testbed-node-5] 2025-03-11 01:09:02.132247 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:02.132260 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:02.132272 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.132284 | orchestrator | 2025-03-11 01:09:02.132297 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-03-11 01:09:02.132310 | orchestrator | Tuesday 11 March 2025 01:04:34 +0000 (0:00:02.489) 0:00:32.891 ********* 2025-03-11 01:09:02.132322 | orchestrator | changed: [testbed-node-3] 2025-03-11 01:09:02.132334 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:09:02.132347 | orchestrator | changed: [testbed-node-5] 2025-03-11 01:09:02.132359 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.132372 | orchestrator | changed: [testbed-node-4] 2025-03-11 01:09:02.132384 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:09:02.132397 | orchestrator | 2025-03-11 01:09:02.132409 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-03-11 01:09:02.132422 | orchestrator | Tuesday 11 March 2025 01:04:40 +0000 (0:00:05.961) 0:00:38.853 ********* 2025-03-11 01:09:02.132434 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:09:02.132454 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:09:02.132467 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:09:02.132479 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.132491 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.132504 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.132516 | orchestrator | 2025-03-11 
01:09:02.132529 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-03-11 01:09:02.132541 | orchestrator | Tuesday 11 March 2025 01:04:41 +0000 (0:00:01.508) 0:00:40.361 ********* 2025-03-11 01:09:02.132554 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:09:02.132566 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:09:02.132578 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:09:02.132591 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.132603 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.132615 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.132628 | orchestrator | 2025-03-11 01:09:02.132640 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-03-11 01:09:02.132654 | orchestrator | Tuesday 11 March 2025 01:04:44 +0000 (0:00:03.215) 0:00:43.576 ********* 2025-03-11 01:09:02.132666 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:09:02.132679 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:09:02.132691 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:09:02.132709 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.132721 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.132734 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.132746 | orchestrator | 2025-03-11 01:09:02.132759 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-03-11 01:09:02.132771 | orchestrator | Tuesday 11 March 2025 01:04:45 +0000 (0:00:01.049) 0:00:44.626 ********* 2025-03-11 01:09:02.132784 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-03-11 01:09:02.132801 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-03-11 01:09:02.132814 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:09:02.132826 | 
orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-03-11 01:09:02.132839 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-03-11 01:09:02.132852 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:09:02.132864 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-03-11 01:09:02.132893 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-03-11 01:09:02.132906 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:09:02.132919 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-03-11 01:09:02.132931 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-03-11 01:09:02.132944 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.132957 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-03-11 01:09:02.132969 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-03-11 01:09:02.132982 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.132994 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-03-11 01:09:02.133006 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-03-11 01:09:02.133019 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.133031 | orchestrator | 2025-03-11 01:09:02.133044 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-03-11 01:09:02.133063 | orchestrator | Tuesday 11 March 2025 01:04:46 +0000 (0:00:00.918) 0:00:45.545 ********* 2025-03-11 01:09:02.133076 | orchestrator | skipping: [testbed-node-3] 2025-03-11 01:09:02.133089 | orchestrator | skipping: [testbed-node-4] 2025-03-11 01:09:02.133101 | orchestrator | skipping: [testbed-node-5] 2025-03-11 01:09:02.133114 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.133126 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.133138 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.133151 | 
orchestrator | 2025-03-11 01:09:02.133164 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-03-11 01:09:02.133183 | orchestrator | 2025-03-11 01:09:02.133196 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-03-11 01:09:02.133208 | orchestrator | Tuesday 11 March 2025 01:04:48 +0000 (0:00:02.088) 0:00:47.633 ********* 2025-03-11 01:09:02.133221 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:02.133233 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:02.133246 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.133258 | orchestrator | 2025-03-11 01:09:02.133271 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-03-11 01:09:02.133283 | orchestrator | Tuesday 11 March 2025 01:04:50 +0000 (0:00:01.512) 0:00:49.146 ********* 2025-03-11 01:09:02.133296 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:02.133308 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:02.133320 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.133333 | orchestrator | 2025-03-11 01:09:02.133345 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-03-11 01:09:02.133358 | orchestrator | Tuesday 11 March 2025 01:04:51 +0000 (0:00:01.297) 0:00:50.443 ********* 2025-03-11 01:09:02.133371 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:02.133383 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:02.133395 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.133408 | orchestrator | 2025-03-11 01:09:02.133420 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-03-11 01:09:02.133433 | orchestrator | Tuesday 11 March 2025 01:04:53 +0000 (0:00:01.458) 0:00:51.901 ********* 2025-03-11 01:09:02.133445 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:02.133458 | orchestrator | ok: 
[testbed-node-1] 2025-03-11 01:09:02.133470 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.133483 | orchestrator | 2025-03-11 01:09:02.133495 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-03-11 01:09:02.133508 | orchestrator | Tuesday 11 March 2025 01:04:53 +0000 (0:00:00.832) 0:00:52.734 ********* 2025-03-11 01:09:02.133533 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.133547 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.133559 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.133572 | orchestrator | 2025-03-11 01:09:02.133584 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-03-11 01:09:02.133597 | orchestrator | Tuesday 11 March 2025 01:04:54 +0000 (0:00:00.397) 0:00:53.132 ********* 2025-03-11 01:09:02.133610 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:09:02.133623 | orchestrator | 2025-03-11 01:09:02.133636 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-03-11 01:09:02.133648 | orchestrator | Tuesday 11 March 2025 01:04:55 +0000 (0:00:01.065) 0:00:54.198 ********* 2025-03-11 01:09:02.133661 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:02.133673 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:02.133686 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.133698 | orchestrator | 2025-03-11 01:09:02.133711 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-03-11 01:09:02.133723 | orchestrator | Tuesday 11 March 2025 01:04:57 +0000 (0:00:02.365) 0:00:56.564 ********* 2025-03-11 01:09:02.133736 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.133748 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.133761 | orchestrator | changed: [testbed-node-0] 
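The manifests directory created on the first master above is k3s's auto-deploy location: anything placed under `/var/lib/rancher/k3s/server/manifests` is applied by the server on startup, which is how the vip rbac and vip manifests in the next tasks take effect. A minimal sketch of staging a manifest there, assuming curl is available (the `stage_manifest` helper name is ours, not the role's; the role templates and pins its own files):

```shell
# Sketch: fetch a manifest into the k3s auto-deploy directory.
stage_manifest() {
    url="$1"
    dir="${2:-/var/lib/rancher/k3s/server/manifests}"   # k3s auto-deploy path
    mkdir -p "$dir"
    curl -fsSL "$url" -o "$dir/$(basename "$url")"
}

# Example (URL as published by the kube-vip docs; the role may pin another):
#   stage_manifest https://kube-vip.io/manifests/rbac.yaml
```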
2025-03-11 01:09:02.133773 | orchestrator | 2025-03-11 01:09:02.133786 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-03-11 01:09:02.133798 | orchestrator | Tuesday 11 March 2025 01:04:59 +0000 (0:00:01.550) 0:00:58.114 ********* 2025-03-11 01:09:02.133811 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.133823 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.133836 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.133848 | orchestrator | 2025-03-11 01:09:02.133861 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-03-11 01:09:02.133908 | orchestrator | Tuesday 11 March 2025 01:05:00 +0000 (0:00:01.471) 0:00:59.585 ********* 2025-03-11 01:09:02.133922 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.133934 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.133947 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.133960 | orchestrator | 2025-03-11 01:09:02.133972 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-03-11 01:09:02.133985 | orchestrator | Tuesday 11 March 2025 01:05:03 +0000 (0:00:02.268) 0:01:01.854 ********* 2025-03-11 01:09:02.133997 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.134010 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.134076 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.134089 | orchestrator | 2025-03-11 01:09:02.134103 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-03-11 01:09:02.134116 | orchestrator | Tuesday 11 March 2025 01:05:03 +0000 (0:00:00.586) 0:01:02.441 ********* 2025-03-11 01:09:02.134128 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.134141 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.134153 | orchestrator | skipping: [testbed-node-2] 
2025-03-11 01:09:02.134165 | orchestrator | 2025-03-11 01:09:02.134178 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-03-11 01:09:02.134191 | orchestrator | Tuesday 11 March 2025 01:05:04 +0000 (0:00:00.509) 0:01:02.950 ********* 2025-03-11 01:09:02.134203 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.134216 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:09:02.134228 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:09:02.134246 | orchestrator | 2025-03-11 01:09:02.134259 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-03-11 01:09:02.134284 | orchestrator | Tuesday 11 March 2025 01:05:05 +0000 (0:00:01.489) 0:01:04.440 ********* 2025-03-11 01:09:02.134308 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-03-11 01:09:02.134322 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-03-11 01:09:02.134335 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-03-11 01:09:02.134348 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-03-11 01:09:02.134361 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-03-11 01:09:02.134374 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-03-11 01:09:02.134386 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-03-11 01:09:02.134399 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-03-11 01:09:02.134411 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-03-11 01:09:02.134424 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-03-11 01:09:02.134442 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-03-11 01:09:02.134454 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
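The retry loop above ("Verify that all nodes actually joined", 20 attempts) boils down to polling a node listing until the expected count registers. A minimal sketch, assuming a `k3s kubectl`-style listing command; the function name, parameters, and 20x cadence here are illustrative, not the role's own:

```shell
# Sketch of the join check: poll until $expected nodes are listed.
wait_for_nodes() {
    list_cmd="$1"; expected="$2"; retries="${3:-20}"
    while [ "$retries" -gt 0 ]; do
        joined=$($list_cmd 2>/dev/null | grep -c .)   # one line per node
        [ "$joined" -ge "$expected" ] && return 0
        retries=$((retries - 1))
        sleep "${SLEEP_SECS:-10}"
    done
    return 1   # caller should inspect k3s-init.service logs, as the task name says
}

# Real usage would be something like:
#   wait_for_nodes "k3s kubectl get nodes --no-headers" 6
```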
2025-03-11 01:09:02.134467 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:02.134492 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:02.134505 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.134518 | orchestrator | 2025-03-11 01:09:02.134530 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-03-11 01:09:02.134543 | orchestrator | Tuesday 11 March 2025 01:05:50 +0000 (0:00:45.356) 0:01:49.797 ********* 2025-03-11 01:09:02.134556 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:02.134568 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:09:02.134581 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:09:02.134593 | orchestrator | 2025-03-11 01:09:02.134610 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-03-11 01:09:02.134623 | orchestrator | Tuesday 11 March 2025 01:05:51 +0000 (0:00:00.378) 0:01:50.175 ********* 2025-03-11 01:09:02.134636 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.134648 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:09:02.134661 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:09:02.134673 | orchestrator | 2025-03-11 01:09:02.134686 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-03-11 01:09:02.134698 | orchestrator | Tuesday 11 March 2025 01:05:52 +0000 (0:00:00.993) 0:01:51.169 ********* 2025-03-11 01:09:02.134711 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.134723 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:09:02.134759 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:09:02.134772 | orchestrator | 2025-03-11 01:09:02.134784 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-03-11 01:09:02.134797 | orchestrator | Tuesday 11 March 2025 01:05:53 +0000 (0:00:01.307) 0:01:52.477 ********* 2025-03-11 01:09:02.134810 
| orchestrator | changed: [testbed-node-2] 2025-03-11 01:09:02.134822 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:09:02.134835 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.134847 | orchestrator | 2025-03-11 01:09:02.134860 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-03-11 01:09:02.134873 | orchestrator | Tuesday 11 March 2025 01:06:08 +0000 (0:00:14.685) 0:02:07.162 ********* 2025-03-11 01:09:02.134939 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:02.134952 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:02.134965 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.134977 | orchestrator | 2025-03-11 01:09:02.134990 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-03-11 01:09:02.135003 | orchestrator | Tuesday 11 March 2025 01:06:09 +0000 (0:00:01.238) 0:02:08.401 ********* 2025-03-11 01:09:02.135015 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:02.135028 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:02.135040 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.135053 | orchestrator | 2025-03-11 01:09:02.135066 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-03-11 01:09:02.135079 | orchestrator | Tuesday 11 March 2025 01:06:10 +0000 (0:00:01.035) 0:02:09.436 ********* 2025-03-11 01:09:02.135091 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.135104 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:09:02.135117 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:09:02.135129 | orchestrator | 2025-03-11 01:09:02.135142 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-03-11 01:09:02.135155 | orchestrator | Tuesday 11 March 2025 01:06:11 +0000 (0:00:00.927) 0:02:10.364 ********* 2025-03-11 01:09:02.135168 | orchestrator | ok: [testbed-node-0] 
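The node-token tasks here follow a register/widen/read/restore pattern: record the file's current mode, loosen it long enough to read the join token, then put the mode back (the "Restore node-token file access" task below). A sketch under those assumptions; the path is the standard k3s location, and the helper name is ours:

```shell
# Sketch of the node-token dance: read the token without leaving it readable.
read_node_token() {
    token_file="${1:-/var/lib/rancher/k3s/server/node-token}"
    orig_mode=$(stat -c '%a' "$token_file")   # register current access mode
    chmod g+rx,o+rx "$token_file"             # widen so the fetch can read it
    cat "$token_file"                         # agents join using this token
    chmod "$orig_mode" "$token_file"          # restore original access
}
```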
2025-03-11 01:09:02.135181 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:02.135194 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.135206 | orchestrator | 2025-03-11 01:09:02.135219 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-03-11 01:09:02.135232 | orchestrator | Tuesday 11 March 2025 01:06:12 +0000 (0:00:01.250) 0:02:11.614 ********* 2025-03-11 01:09:02.135250 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:02.135263 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:02.135275 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:02.135295 | orchestrator | 2025-03-11 01:09:02.135308 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-03-11 01:09:02.135321 | orchestrator | Tuesday 11 March 2025 01:06:13 +0000 (0:00:00.598) 0:02:12.213 ********* 2025-03-11 01:09:02.135333 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.135346 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:09:02.135358 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:09:02.135371 | orchestrator | 2025-03-11 01:09:02.135383 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-03-11 01:09:02.135396 | orchestrator | Tuesday 11 March 2025 01:06:14 +0000 (0:00:01.109) 0:02:13.322 ********* 2025-03-11 01:09:02.135408 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.135421 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:09:02.135434 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:09:02.135446 | orchestrator | 2025-03-11 01:09:02.135459 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-03-11 01:09:02.135471 | orchestrator | Tuesday 11 March 2025 01:06:15 +0000 (0:00:00.846) 0:02:14.169 ********* 2025-03-11 01:09:02.135484 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:09:02.135496 | 
orchestrator | changed: [testbed-node-1]
2025-03-11 01:09:02.135509 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:09:02.135521 | orchestrator |
2025-03-11 01:09:02.135534 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-03-11 01:09:02.135547 | orchestrator | Tuesday 11 March 2025 01:06:16 +0000 (0:00:01.531) 0:02:15.700 *********
2025-03-11 01:09:02.135559 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:09:02.135572 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:09:02.135584 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:09:02.135596 | orchestrator |
2025-03-11 01:09:02.135609 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-03-11 01:09:02.135622 | orchestrator | Tuesday 11 March 2025 01:06:17 +0000 (0:00:01.053) 0:02:16.754 *********
2025-03-11 01:09:02.135634 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.135647 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:09:02.135659 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:09:02.135671 | orchestrator |
2025-03-11 01:09:02.135684 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-03-11 01:09:02.135696 | orchestrator | Tuesday 11 March 2025 01:06:18 +0000 (0:00:00.769) 0:02:17.523 *********
2025-03-11 01:09:02.135709 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.135721 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:09:02.135734 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:09:02.135746 | orchestrator |
2025-03-11 01:09:02.135759 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-03-11 01:09:02.135771 | orchestrator | Tuesday 11 March 2025 01:06:19 +0000 (0:00:00.829) 0:02:18.352 *********
2025-03-11 01:09:02.135784 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:09:02.135796 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:09:02.135809 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:09:02.135821 | orchestrator |
2025-03-11 01:09:02.135834 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-03-11 01:09:02.135847 | orchestrator | Tuesday 11 March 2025 01:06:21 +0000 (0:00:01.741) 0:02:20.093 *********
2025-03-11 01:09:02.135859 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:09:02.135894 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:09:02.135909 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:09:02.135922 | orchestrator |
2025-03-11 01:09:02.135935 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-03-11 01:09:02.135952 | orchestrator | Tuesday 11 March 2025 01:06:22 +0000 (0:00:01.317) 0:02:21.411 *********
2025-03-11 01:09:02.135965 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-03-11 01:09:02.135978 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-03-11 01:09:02.135998 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-03-11 01:09:02.136011 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-03-11 01:09:02.136024 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-03-11 01:09:02.136036 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-03-11 01:09:02.136060 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-03-11 01:09:02.136074 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-03-11 01:09:02.136086 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-03-11 01:09:02.136107 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-03-11 01:09:02.136120 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-03-11 01:09:02.136133 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-03-11 01:09:02.136145 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-03-11 01:09:02.136158 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-03-11 01:09:02.136170 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-03-11 01:09:02.136183 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-03-11 01:09:02.136201 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-03-11 01:09:02.136214 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-03-11 01:09:02.136227 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-03-11 01:09:02.136239 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-03-11 01:09:02.136252 | orchestrator |
2025-03-11 01:09:02.136265 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-03-11 01:09:02.136277 | orchestrator |
2025-03-11 01:09:02.136290 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-03-11 01:09:02.136302 | orchestrator | Tuesday 11 March 2025 01:06:26 +0000 (0:00:04.182) 0:02:25.594 *********
2025-03-11 01:09:02.136315 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:09:02.136327 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:09:02.136340 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:09:02.136353 | orchestrator |
2025-03-11 01:09:02.136365 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-03-11 01:09:02.136378 | orchestrator | Tuesday 11 March 2025 01:06:27 +0000 (0:00:00.810) 0:02:26.405 *********
2025-03-11 01:09:02.136390 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:09:02.136403 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:09:02.136415 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:09:02.136428 | orchestrator |
2025-03-11 01:09:02.136440 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-03-11 01:09:02.136452 | orchestrator | Tuesday 11 March 2025 01:06:28 +0000 (0:00:01.058) 0:02:27.463 *********
2025-03-11 01:09:02.136465 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:09:02.136477 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:09:02.136490 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:09:02.136502 | orchestrator |
2025-03-11 01:09:02.136514 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-03-11 01:09:02.136527 | orchestrator | Tuesday 11 March 2025 01:06:29 +0000 (0:00:00.704) 0:02:28.167 *********
2025-03-11 01:09:02.136539 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-03-11 01:09:02.136559 | orchestrator |
2025-03-11 01:09:02.136572 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-03-11 01:09:02.136585 | orchestrator | Tuesday 11 March 2025 01:06:30 +0000 (0:00:01.171) 0:02:29.339 *********
2025-03-11 01:09:02.136598 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:09:02.136610 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:09:02.136623 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:09:02.136635 | orchestrator |
2025-03-11 01:09:02.136648 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-03-11 01:09:02.136660 | orchestrator | Tuesday 11 March 2025 01:06:31 +0000 (0:00:00.494) 0:02:29.834 *********
2025-03-11 01:09:02.136673 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:09:02.136685 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:09:02.136698 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:09:02.136710 | orchestrator |
2025-03-11 01:09:02.136723 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-03-11 01:09:02.136735 | orchestrator | Tuesday 11 March 2025 01:06:31 +0000 (0:00:00.517) 0:02:30.351 *********
2025-03-11 01:09:02.136748 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:09:02.136760 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:09:02.136773 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:09:02.136785 | orchestrator |
2025-03-11 01:09:02.136798 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-03-11 01:09:02.136810 | orchestrator | Tuesday 11 March 2025 01:06:32 +0000 (0:00:00.608) 0:02:30.960 *********
2025-03-11 01:09:02.136823 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:09:02.136835 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:09:02.136848 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:09:02.136860 | orchestrator |
2025-03-11 01:09:02.136873 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-03-11 01:09:02.136938 | orchestrator | Tuesday 11 March 2025 01:06:35 +0000 (0:00:03.198) 0:02:34.159 *********
2025-03-11 01:09:02.136951 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:09:02.136964 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:09:02.136977 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:09:02.136989 | orchestrator |
2025-03-11 01:09:02.137002 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-03-11 01:09:02.137014 | orchestrator |
2025-03-11 01:09:02.137027 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-03-11 01:09:02.137040 | orchestrator | Tuesday 11 March 2025 01:06:45 +0000 (0:00:10.321) 0:02:44.480 *********
2025-03-11 01:09:02.137052 | orchestrator | ok: [testbed-manager]
2025-03-11 01:09:02.137065 | orchestrator |
2025-03-11 01:09:02.137077 | orchestrator | TASK [Create .kube directory] **************************************************
2025-03-11 01:09:02.137090 | orchestrator | Tuesday 11 March 2025 01:06:46 +0000 (0:00:00.646) 0:02:45.126 *********
2025-03-11 01:09:02.137103 | orchestrator | changed: [testbed-manager]
2025-03-11 01:09:02.137115 | orchestrator |
2025-03-11 01:09:02.137128 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-03-11 01:09:02.137141 | orchestrator | Tuesday 11 March 2025 01:06:46 +0000 (0:00:00.520) 0:02:45.647 *********
2025-03-11 01:09:02.137153 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-03-11 01:09:02.137166 | orchestrator |
2025-03-11 01:09:02.137179 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-03-11 01:09:02.137196 | orchestrator | Tuesday 11 March 2025 01:06:47 +0000 (0:00:00.798) 0:02:46.446 *********
2025-03-11 01:09:02.137209 | orchestrator | changed: [testbed-manager]
2025-03-11 01:09:02.137222 | orchestrator |
2025-03-11 01:09:02.137235 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-03-11 01:09:02.137247 | orchestrator | Tuesday 11 March 2025 01:06:48 +0000 (0:00:01.037) 0:02:47.483 *********
2025-03-11 01:09:02.137260 | orchestrator | changed: [testbed-manager]
2025-03-11 01:09:02.137279 | orchestrator |
2025-03-11 01:09:02.137292 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-03-11 01:09:02.137311 | orchestrator | Tuesday 11 March 2025 01:06:49 +0000 (0:00:00.730) 0:02:48.214 *********
2025-03-11 01:09:02.137324 | orchestrator | changed: [testbed-manager -> localhost]
2025-03-11 01:09:02.137337 | orchestrator |
2025-03-11 01:09:02.137349 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-03-11 01:09:02.137362 | orchestrator | Tuesday 11 March 2025 01:06:50 +0000 (0:00:01.102) 0:02:49.316 *********
2025-03-11 01:09:02.137374 | orchestrator | changed: [testbed-manager -> localhost]
2025-03-11 01:09:02.137387 | orchestrator |
2025-03-11 01:09:02.137400 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-03-11 01:09:02.137412 | orchestrator | Tuesday 11 March 2025 01:06:51 +0000 (0:00:00.635) 0:02:49.952 *********
2025-03-11 01:09:02.137425 | orchestrator | changed: [testbed-manager]
2025-03-11 01:09:02.137438 | orchestrator |
2025-03-11 01:09:02.137450 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-03-11 01:09:02.137463 | orchestrator | Tuesday 11 March 2025 01:06:51 +0000 (0:00:00.582) 0:02:50.535 *********
2025-03-11 01:09:02.137475 | orchestrator | changed: [testbed-manager]
2025-03-11 01:09:02.137487 | orchestrator |
2025-03-11 01:09:02.137500 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-03-11 01:09:02.137512 | orchestrator |
2025-03-11 01:09:02.137525 | orchestrator | TASK [osism.commons.kubectl : Gather variables for each operating system] ******
2025-03-11 01:09:02.137538 | orchestrator | Tuesday 11 March 2025 01:06:52 +0000 (0:00:00.600) 0:02:51.135 *********
2025-03-11 01:09:02.137550 | orchestrator | [WARNING]: Found variable using reserved name: q
2025-03-11 01:09:02.137563 | orchestrator | ok: [testbed-manager]
2025-03-11 01:09:02.137575 | orchestrator |
2025-03-11 01:09:02.137588 | orchestrator | TASK [osism.commons.kubectl : Include distribution specific install tasks] *****
2025-03-11 01:09:02.137600 | orchestrator | Tuesday 11 March 2025 01:06:52 +0000 (0:00:00.159) 0:02:51.295 *********
2025-03-11 01:09:02.137613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-03-11 01:09:02.137627 | orchestrator |
2025-03-11 01:09:02.137640 | orchestrator | TASK [osism.commons.kubectl : Remove old architecture-dependent repository] ****
2025-03-11 01:09:02.137652 | orchestrator | Tuesday 11 March 2025 01:06:52 +0000 (0:00:00.260) 0:02:51.555 *********
2025-03-11 01:09:02.137665 | orchestrator | ok: [testbed-manager]
2025-03-11 01:09:02.137678 | orchestrator |
2025-03-11 01:09:02.137690 | orchestrator | TASK [osism.commons.kubectl : Install apt-transport-https package] *************
2025-03-11 01:09:02.137703 | orchestrator | Tuesday 11 March 2025 01:06:54 +0000 (0:00:02.145) 0:02:53.700 *********
2025-03-11 01:09:02.137715 | orchestrator | ok: [testbed-manager]
2025-03-11 01:09:02.137728 | orchestrator |
2025-03-11 01:09:02.137741 | orchestrator | TASK [osism.commons.kubectl : Add repository gpg key] **************************
2025-03-11 01:09:02.137753 | orchestrator | Tuesday 11 March 2025 01:06:56 +0000 (0:00:01.844) 0:02:55.544 *********
2025-03-11 01:09:02.137765 | orchestrator | changed: [testbed-manager]
2025-03-11 01:09:02.137778 | orchestrator |
2025-03-11 01:09:02.137790 | orchestrator | TASK [osism.commons.kubectl : Set permissions of gpg key] **********************
2025-03-11 01:09:02.137803 | orchestrator | Tuesday 11 March 2025 01:06:57 +0000 (0:00:00.883) 0:02:56.428 *********
2025-03-11 01:09:02.137815 | orchestrator | ok: [testbed-manager]
2025-03-11 01:09:02.137828 | orchestrator |
2025-03-11 01:09:02.137840 | orchestrator | TASK [osism.commons.kubectl : Add repository Debian] ***************************
2025-03-11 01:09:02.137853 | orchestrator | Tuesday 11 March 2025 01:06:58 +0000 (0:00:00.625) 0:02:57.054 *********
2025-03-11 01:09:02.137865 | orchestrator | changed: [testbed-manager]
2025-03-11 01:09:02.137894 | orchestrator |
2025-03-11 01:09:02.137907 | orchestrator | TASK [osism.commons.kubectl : Install required packages] ***********************
2025-03-11 01:09:02.137920 | orchestrator | Tuesday 11 March 2025 01:07:06 +0000 (0:00:08.274) 0:03:05.328 *********
2025-03-11 01:09:02.137939 | orchestrator | changed: [testbed-manager]
2025-03-11 01:09:02.137951 | orchestrator |
2025-03-11 01:09:02.137964 | orchestrator | TASK [osism.commons.kubectl : Remove kubectl symlink] **************************
2025-03-11 01:09:02.137976 | orchestrator | Tuesday 11 March 2025 01:07:23 +0000 (0:00:17.065) 0:03:22.394 *********
2025-03-11 01:09:02.137989 | orchestrator | ok: [testbed-manager]
2025-03-11 01:09:02.138001 | orchestrator |
2025-03-11 01:09:02.138014 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-03-11 01:09:02.138064 | orchestrator |
2025-03-11 01:09:02.138077 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-03-11 01:09:02.138095 | orchestrator | Tuesday 11 March 2025 01:07:24 +0000 (0:00:00.798) 0:03:23.192 *********
2025-03-11 01:09:02.138108 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:09:02.138120 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:09:02.138133 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:09:02.138145 | orchestrator |
2025-03-11 01:09:02.138158 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-03-11 01:09:02.138170 | orchestrator | Tuesday 11 March 2025 01:07:25 +0000 (0:00:01.067) 0:03:24.260 *********
2025-03-11 01:09:02.138183 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.138195 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:09:02.138223 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:09:02.138236 | orchestrator |
2025-03-11 01:09:02.138248 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-03-11 01:09:02.138261 | orchestrator | Tuesday 11 March 2025 01:07:25 +0000 (0:00:00.450) 0:03:24.711 *********
2025-03-11 01:09:02.138273 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:09:02.138285 | orchestrator |
2025-03-11 01:09:02.138298 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-03-11 01:09:02.138310 | orchestrator | Tuesday 11 March 2025 01:07:26 +0000 (0:00:00.776) 0:03:25.488 *********
2025-03-11 01:09:02.138323 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-03-11 01:09:02.138335 | orchestrator |
2025-03-11 01:09:02.138348 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-03-11 01:09:02.138360 | orchestrator | Tuesday 11 March 2025 01:07:27 +0000 (0:00:00.599) 0:03:26.087 *********
2025-03-11 01:09:02.138379 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-03-11 01:09:02.138392 | orchestrator |
2025-03-11 01:09:02.138405 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-03-11 01:09:02.138417 | orchestrator | Tuesday 11 March 2025 01:07:28 +0000 (0:00:00.737) 0:03:26.825 *********
2025-03-11 01:09:02.138430 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.138443 | orchestrator |
2025-03-11 01:09:02.138455 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-03-11 01:09:02.138468 | orchestrator | Tuesday 11 March 2025 01:07:28 +0000 (0:00:00.961) 0:03:27.786 *********
2025-03-11 01:09:02.138480 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-03-11 01:09:02.138493 | orchestrator |
2025-03-11 01:09:02.138505 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-03-11 01:09:02.138518 | orchestrator | Tuesday 11 March 2025 01:07:30 +0000 (0:00:01.295) 0:03:29.081 *********
2025-03-11 01:09:02.138530 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.138547 | orchestrator |
2025-03-11 01:09:02.138560 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-03-11 01:09:02.138573 | orchestrator | Tuesday 11 March 2025 01:07:30 +0000 (0:00:00.241) 0:03:29.323 *********
2025-03-11 01:09:02.138586 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.138598 | orchestrator |
2025-03-11 01:09:02.138611 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-03-11 01:09:02.138623 | orchestrator | Tuesday 11 March 2025 01:07:30 +0000 (0:00:00.255) 0:03:29.579 *********
2025-03-11 01:09:02.138636 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.138648 | orchestrator |
2025-03-11 01:09:02.138666 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-03-11 01:09:02.138679 | orchestrator | Tuesday 11 March 2025 01:07:31 +0000 (0:00:00.252) 0:03:29.831 *********
2025-03-11 01:09:02.138692 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.138704 | orchestrator |
2025-03-11 01:09:02.138717 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-03-11 01:09:02.138729 | orchestrator | Tuesday 11 March 2025 01:07:31 +0000 (0:00:00.255) 0:03:30.086 *********
2025-03-11 01:09:02.138747 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-03-11 01:09:02.138760 | orchestrator |
2025-03-11 01:09:02.138773 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-03-11 01:09:02.138786 | orchestrator | Tuesday 11 March 2025 01:07:44 +0000 (0:00:13.051) 0:03:43.138 *********
2025-03-11 01:09:02.138798 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2025-03-11 01:09:02.138811 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2025-03-11 01:09:02.138823 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-03-11 01:09:02.138836 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-03-11 01:09:02.138849 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-03-11 01:09:02.138862 | orchestrator |
2025-03-11 01:09:02.138887 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-03-11 01:09:02.138901 | orchestrator | Tuesday 11 March 2025 01:08:25 +0000 (0:00:41.661) 0:04:24.800 *********
2025-03-11 01:09:02.138914 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-03-11 01:09:02.138926 | orchestrator |
2025-03-11 01:09:02.138957 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-03-11 01:09:02.138970 | orchestrator | Tuesday 11 March 2025 01:08:27 +0000 (0:00:01.496) 0:04:26.297 *********
2025-03-11 01:09:02.138988 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-03-11 01:09:02.139001 | orchestrator |
2025-03-11 01:09:02.139014 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-03-11 01:09:02.139026 | orchestrator | Tuesday 11 March 2025 01:08:28 +0000 (0:00:01.152) 0:04:27.450 *********
2025-03-11 01:09:02.139039 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-03-11 01:09:02.139052 | orchestrator |
2025-03-11 01:09:02.139072 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-03-11 01:09:02.139085 | orchestrator | Tuesday 11 March 2025 01:08:29 +0000 (0:00:01.215) 0:04:28.666 *********
2025-03-11 01:09:02.139176 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.139192 | orchestrator |
2025-03-11 01:09:02.139205 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-03-11 01:09:02.139217 | orchestrator | Tuesday 11 March 2025 01:08:30 +0000 (0:00:00.318) 0:04:28.984 *********
2025-03-11 01:09:02.139230 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-03-11 01:09:02.139242 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-03-11 01:09:02.139255 | orchestrator |
2025-03-11 01:09:02.139267 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-03-11 01:09:02.139280 | orchestrator | Tuesday 11 March 2025 01:08:32 +0000 (0:00:02.316) 0:04:31.300 *********
2025-03-11 01:09:02.139293 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.139305 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:09:02.139318 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:09:02.139330 | orchestrator |
2025-03-11 01:09:02.139343 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-03-11 01:09:02.139355 | orchestrator | Tuesday 11 March 2025 01:08:32 +0000 (0:00:00.341) 0:04:31.642 *********
2025-03-11 01:09:02.139368 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:09:02.139381 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:09:02.139393 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:09:02.139414 | orchestrator |
2025-03-11 01:09:02.139427 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-03-11 01:09:02.139440 | orchestrator |
2025-03-11 01:09:02.139452 | orchestrator | TASK [osism.commons.k9s : Gather variables for each operating system] **********
2025-03-11 01:09:02.139465 | orchestrator | Tuesday 11 March 2025 01:08:34 +0000 (0:00:01.281) 0:04:32.924 *********
2025-03-11 01:09:02.139477 | orchestrator | ok: [testbed-manager]
2025-03-11 01:09:02.139490 | orchestrator |
2025-03-11 01:09:02.139509 | orchestrator | TASK [osism.commons.k9s : Include distribution specific install tasks] *********
2025-03-11 01:09:02.139522 | orchestrator | Tuesday 11 March 2025 01:08:34 +0000 (0:00:00.176) 0:04:33.100 *********
2025-03-11 01:09:02.139535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-03-11 01:09:02.139548 | orchestrator |
2025-03-11 01:09:02.139560 | orchestrator | TASK [osism.commons.k9s : Install k9s packages] ********************************
2025-03-11 01:09:02.139573 | orchestrator | Tuesday 11 March 2025 01:08:34 +0000 (0:00:00.512) 0:04:33.612 *********
2025-03-11 01:09:02.139585 | orchestrator | changed: [testbed-manager]
2025-03-11 01:09:02.139598 | orchestrator |
2025-03-11 01:09:02.139611 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-03-11 01:09:02.139623 | orchestrator |
2025-03-11 01:09:02.139636 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-03-11 01:09:02.139648 | orchestrator | Tuesday 11 March 2025 01:08:41 +0000 (0:00:06.428) 0:04:40.041 *********
2025-03-11 01:09:02.139661 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:09:02.139673 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:09:02.139686 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:09:02.139698 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:09:02.139711 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:09:02.139723 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:09:02.139736 | orchestrator |
2025-03-11 01:09:02.139749 | orchestrator | TASK [Manage labels] ***********************************************************
2025-03-11 01:09:02.139761 | orchestrator | Tuesday 11 March 2025 01:08:41 +0000 (0:00:00.729) 0:04:40.770 *********
2025-03-11 01:09:02.139774 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-03-11 01:09:02.139787 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-03-11 01:09:02.139800 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-03-11 01:09:02.139812 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-03-11 01:09:02.139825 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-03-11 01:09:02.139838 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-03-11 01:09:02.139850 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-03-11 01:09:02.139862 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-03-11 01:09:02.139893 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-03-11 01:09:02.139907 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-03-11 01:09:02.139920 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-03-11 01:09:02.139933 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-03-11 01:09:02.139950 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-03-11 01:09:02.139964 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-03-11 01:09:02.139976 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-03-11 01:09:02.139989 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-03-11 01:09:02.140007 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-03-11 01:09:02.140020 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-03-11 01:09:02.140032 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-03-11 01:09:02.140044 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-03-11 01:09:02.140057 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-03-11 01:09:02.140070 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-03-11 01:09:02.140082 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-03-11 01:09:02.140094 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-03-11 01:09:02.140107 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-03-11 01:09:02.140119 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-03-11 01:09:02.140132 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-03-11 01:09:02.140144 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-03-11 01:09:02.140157 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-03-11 01:09:02.140169 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-03-11 01:09:02.140182 | orchestrator |
2025-03-11 01:09:02.140195 | orchestrator | TASK [Manage annotations] ******************************************************
2025-03-11 01:09:02.140207 | orchestrator | Tuesday 11 March 2025 01:08:57 +0000 (0:00:16.024) 0:04:56.795 *********
2025-03-11 01:09:02.140220 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:09:02.140232 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:09:02.140245 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:09:02.140263 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.140276 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:09:02.140289 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:09:02.140301 | orchestrator |
2025-03-11 01:09:02.140314 | orchestrator | TASK [Manage taints] ***********************************************************
2025-03-11 01:09:02.140327 | orchestrator | Tuesday 11 March 2025 01:08:59 +0000 (0:00:01.238) 0:04:58.034 *********
2025-03-11 01:09:02.140339 | orchestrator | skipping: [testbed-node-3]
2025-03-11 01:09:02.140352 | orchestrator | skipping: [testbed-node-4]
2025-03-11 01:09:02.140364 | orchestrator | skipping: [testbed-node-5]
2025-03-11 01:09:02.140377 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:02.140390 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:09:02.140402 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:09:02.140415 | orchestrator |
2025-03-11 01:09:02.140427 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:09:02.140440 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-03-11 01:09:02.140453 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-03-11 01:09:02.140466 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-03-11 01:09:02.140479 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-03-11 01:09:02.140492 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-03-11 01:09:02.140510 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-03-11 01:09:02.140523 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-03-11 01:09:02.140536 | orchestrator |
2025-03-11 01:09:02.140548 | orchestrator |
2025-03-11 01:09:02.140561 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:09:02.140573 | orchestrator | Tuesday 11 March 2025 01:09:00 +0000 (0:00:01.017) 0:04:59.051 *********
2025-03-11 01:09:02.140586 | orchestrator | ===============================================================================
2025-03-11 01:09:02.140599 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 45.36s
2025-03-11 01:09:02.140611 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 41.66s
2025-03-11 01:09:02.140624 | orchestrator | osism.commons.kubectl : Install required packages ---------------------- 17.07s
2025-03-11 01:09:02.140637 | orchestrator | Manage labels ---------------------------------------------------------- 16.02s
2025-03-11 01:09:02.140649 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.69s
2025-03-11 01:09:02.140662 | orchestrator | k3s_server_post : Install Cilium --------------------------------------- 13.05s
2025-03-11 01:09:02.140675 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.32s
2025-03-11 01:09:02.140691 | orchestrator | osism.commons.kubectl : Add repository Debian --------------------------- 8.27s
2025-03-11 01:09:02.140704 | orchestrator | osism.commons.k9s : Install k9s packages -------------------------------- 6.43s
2025-03-11 01:09:02.140717 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.96s
2025-03-11 01:09:02.140730 | orchestrator | k3s_prereq : Set SELinux to disabled state ------------------------------ 5.95s
2025-03-11 01:09:02.140742 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 5.07s
2025-03-11 01:09:02.140755 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.18s
2025-03-11 01:09:02.140768 | orchestrator | k3s_prereq : Set same timezone on every Server -------------------------- 4.13s
2025-03-11 01:09:02.140780 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.22s
2025-03-11 01:09:02.140793 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 3.20s
2025-03-11 01:09:02.140805 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.09s
2025-03-11 01:09:02.140818 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.76s
2025-03-11 01:09:02.140830 | orchestrator | k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries --- 2.49s
2025-03-11 01:09:02.140844 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.37s
2025-03-11 01:09:02.140856 | orchestrator | 2025-03-11 01:09:02 | INFO  | Task b788c338-5c04-4687-838a-8b0a8e04cfab is in state SUCCESS
2025-03-11 01:09:02.140869 | orchestrator | 2025-03-11 01:09:02 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:09:02.140931 | orchestrator | 2025-03-11 01:09:02 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:09:02.140951 | orchestrator | 2025-03-11 01:09:02 | INFO  | Task 74456c27-d93a-451b-bd8c-176949a0b429 is in state STARTED
2025-03-11 01:09:05.221832 | orchestrator | 2025-03-11 01:09:02 | INFO  | Task 645d0e1a-47fe-48f1-871f-e7bd6a45537f is in state STARTED
2025-03-11 01:09:05.222002 | orchestrator | 2025-03-11 01:09:02 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:09:05.222092 | orchestrator | 2025-03-11 01:09:02 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:09:05.222137 | orchestrator | 2025-03-11 01:09:02 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:05.222171 | orchestrator | 2025-03-11 01:09:05 | INFO  | Task cbba2294-f2ed-4873-b840-9da91a758c7a is in state STARTED
2025-03-11 01:09:05.224806 | orchestrator | 2025-03-11 01:09:05 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:09:05.228988 | orchestrator | 2025-03-11 01:09:05 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:09:05.232216 | orchestrator | 2025-03-11 01:09:05 | INFO  | Task 74456c27-d93a-451b-bd8c-176949a0b429 is in state STARTED
2025-03-11 01:09:05.237491 | orchestrator | 2025-03-11 01:09:05 | INFO  | Task 645d0e1a-47fe-48f1-871f-e7bd6a45537f is in state STARTED
2025-03-11 01:09:05.239974 | orchestrator | 2025-03-11 01:09:05 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:09:05.240004 | orchestrator | 2025-03-11 01:09:05 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:09:05.241391 | orchestrator | 2025-03-11 01:09:05 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:08.290469 | orchestrator | 2025-03-11 01:09:08 | INFO  | Task cbba2294-f2ed-4873-b840-9da91a758c7a is in state STARTED
2025-03-11 01:09:11.334575 | orchestrator | 2025-03-11 01:09:08 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:09:11.334689 | orchestrator | 2025-03-11 01:09:08 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:09:11.334707 | orchestrator | 2025-03-11 01:09:08 | INFO  | Task 74456c27-d93a-451b-bd8c-176949a0b429 is in state STARTED
2025-03-11 01:09:11.334721 | orchestrator | 2025-03-11 01:09:08 | INFO  | Task 645d0e1a-47fe-48f1-871f-e7bd6a45537f is in state SUCCESS
2025-03-11 01:09:11.334734 | orchestrator | 2025-03-11 01:09:08 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:09:11.334747 | orchestrator | 2025-03-11 01:09:08 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:09:11.334761 | orchestrator | 2025-03-11 01:09:08 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:11.334791 | orchestrator | 2025-03-11 01:09:11 | INFO  | Task cbba2294-f2ed-4873-b840-9da91a758c7a is in state STARTED
2025-03-11 01:09:14.399256 | orchestrator | 2025-03-11 01:09:11 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:09:14.399378 | orchestrator | 2025-03-11 01:09:11 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:09:14.399397 | orchestrator | 2025-03-11 01:09:11 | INFO  | Task 74456c27-d93a-451b-bd8c-176949a0b429 is in state STARTED
2025-03-11 01:09:14.399412 | orchestrator | 2025-03-11 01:09:11 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:09:14.399427 | orchestrator | 2025-03-11 01:09:11 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED
2025-03-11 01:09:14.399442 | orchestrator | 2025-03-11 01:09:11 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:14.399473 | orchestrator | 2025-03-11 01:09:14 | INFO  | Task cbba2294-f2ed-4873-b840-9da91a758c7a is in state STARTED
2025-03-11
01:09:14.403630 | orchestrator | 2025-03-11 01:09:14 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED 2025-03-11 01:09:14.403668 | orchestrator | 2025-03-11 01:09:14 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:09:14.404126 | orchestrator | 2025-03-11 01:09:14 | INFO  | Task 74456c27-d93a-451b-bd8c-176949a0b429 is in state SUCCESS 2025-03-11 01:09:14.404759 | orchestrator | 2025-03-11 01:09:14 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:09:14.408266 | orchestrator | 2025-03-11 01:09:14 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED 2025-03-11 01:09:14.411496 | orchestrator | 2025-03-11 01:09:14 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:09:17.478590 | orchestrator | 2025-03-11 01:09:17 | INFO  | Task cbba2294-f2ed-4873-b840-9da91a758c7a is in state SUCCESS 2025-03-11 01:09:17.479686 | orchestrator | 2025-03-11 01:09:17 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED 2025-03-11 01:09:17.479735 | orchestrator | 2025-03-11 01:09:17 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:09:17.481578 | orchestrator | 2025-03-11 01:09:17 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:09:17.483440 | orchestrator | 2025-03-11 01:09:17 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED 2025-03-11 01:09:17.483664 | orchestrator | 2025-03-11 01:09:17 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:09:20.535222 | orchestrator | 2025-03-11 01:09:20 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED 2025-03-11 01:09:20.535601 | orchestrator | 2025-03-11 01:09:20 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:09:20.535635 | orchestrator | 2025-03-11 01:09:20 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 
01:09:20.535659 | orchestrator | 2025-03-11 01:09:20 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED 2025-03-11 01:09:23.574351 | orchestrator | 2025-03-11 01:09:20 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:09:23.574494 | orchestrator | 2025-03-11 01:09:23 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED 2025-03-11 01:09:23.575620 | orchestrator | 2025-03-11 01:09:23 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:09:23.579190 | orchestrator | 2025-03-11 01:09:23 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:09:23.580957 | orchestrator | 2025-03-11 01:09:23 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED 2025-03-11 01:09:26.632191 | orchestrator | 2025-03-11 01:09:23 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:09:26.632298 | orchestrator | 2025-03-11 01:09:26 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED 2025-03-11 01:09:26.634790 | orchestrator | 2025-03-11 01:09:26 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:09:26.635387 | orchestrator | 2025-03-11 01:09:26 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:09:26.638065 | orchestrator | 2025-03-11 01:09:26 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED 2025-03-11 01:09:29.689921 | orchestrator | 2025-03-11 01:09:26 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:09:29.690089 | orchestrator | 2025-03-11 01:09:29 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED 2025-03-11 01:09:29.690564 | orchestrator | 2025-03-11 01:09:29 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:09:29.690596 | orchestrator | 2025-03-11 01:09:29 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:09:29.691247 | orchestrator 
| 2025-03-11 01:09:29 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED 2025-03-11 01:09:29.691901 | orchestrator | 2025-03-11 01:09:29 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:09:32.742414 | orchestrator | 2025-03-11 01:09:32 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED 2025-03-11 01:09:32.743033 | orchestrator | 2025-03-11 01:09:32 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:09:32.744999 | orchestrator | 2025-03-11 01:09:32 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:09:32.746904 | orchestrator | 2025-03-11 01:09:32 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED 2025-03-11 01:09:35.809334 | orchestrator | 2025-03-11 01:09:32 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:09:35.809479 | orchestrator | 2025-03-11 01:09:35 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED 2025-03-11 01:09:35.813543 | orchestrator | 2025-03-11 01:09:35 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:09:35.815279 | orchestrator | 2025-03-11 01:09:35 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:09:35.819030 | orchestrator | 2025-03-11 01:09:35 | INFO  | Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state STARTED 2025-03-11 01:09:38.864242 | orchestrator | 2025-03-11 01:09:35 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:09:38.864383 | orchestrator | 2025-03-11 01:09:38 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED 2025-03-11 01:09:38.864998 | orchestrator | 2025-03-11 01:09:38 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:09:38.865029 | orchestrator | 2025-03-11 01:09:38 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:09:38.866063 | orchestrator | 2025-03-11 01:09:38 | INFO  | 
Task 2162321f-25fa-43fb-a7a6-e0a9a346d8b6 is in state SUCCESS 2025-03-11 01:09:38.867145 | orchestrator | 2025-03-11 01:09:38.867175 | orchestrator | None 2025-03-11 01:09:38.867190 | orchestrator | 2025-03-11 01:09:38.867205 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-03-11 01:09:38.867219 | orchestrator | 2025-03-11 01:09:38.867234 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-03-11 01:09:38.867248 | orchestrator | Tuesday 11 March 2025 01:09:09 +0000 (0:00:00.228) 0:00:00.229 ********* 2025-03-11 01:09:38.867263 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-03-11 01:09:38.867277 | orchestrator | 2025-03-11 01:09:38.867292 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-03-11 01:09:38.867306 | orchestrator | Tuesday 11 March 2025 01:09:10 +0000 (0:00:00.802) 0:00:01.031 ********* 2025-03-11 01:09:38.867321 | orchestrator | changed: [testbed-manager] 2025-03-11 01:09:38.867336 | orchestrator | 2025-03-11 01:09:38.867350 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-03-11 01:09:38.867364 | orchestrator | Tuesday 11 March 2025 01:09:11 +0000 (0:00:01.469) 0:00:02.501 ********* 2025-03-11 01:09:38.867378 | orchestrator | changed: [testbed-manager] 2025-03-11 01:09:38.867392 | orchestrator | 2025-03-11 01:09:38.867406 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:09:38.867420 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:09:38.867436 | orchestrator | 2025-03-11 01:09:38.867450 | orchestrator | 2025-03-11 01:09:38.867464 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-11 01:09:38.867478 | orchestrator | Tuesday 11 March 
2025 01:09:12 +0000 (0:00:00.689) 0:00:03.190 ********* 2025-03-11 01:09:38.867492 | orchestrator | =============================================================================== 2025-03-11 01:09:38.867506 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.47s 2025-03-11 01:09:38.867549 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s 2025-03-11 01:09:38.867564 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.69s 2025-03-11 01:09:38.867578 | orchestrator | 2025-03-11 01:09:38.867592 | orchestrator | 2025-03-11 01:09:38.867606 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-03-11 01:09:38.867620 | orchestrator | 2025-03-11 01:09:38.867633 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-03-11 01:09:38.867647 | orchestrator | Tuesday 11 March 2025 01:09:08 +0000 (0:00:00.308) 0:00:00.308 ********* 2025-03-11 01:09:38.867661 | orchestrator | ok: [testbed-manager] 2025-03-11 01:09:38.867676 | orchestrator | 2025-03-11 01:09:38.867690 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-03-11 01:09:38.867703 | orchestrator | Tuesday 11 March 2025 01:09:09 +0000 (0:00:00.788) 0:00:01.097 ********* 2025-03-11 01:09:38.867717 | orchestrator | ok: [testbed-manager] 2025-03-11 01:09:38.867731 | orchestrator | 2025-03-11 01:09:38.867747 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-03-11 01:09:38.867777 | orchestrator | Tuesday 11 March 2025 01:09:10 +0000 (0:00:00.600) 0:00:01.698 ********* 2025-03-11 01:09:38.867794 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-03-11 01:09:38.867810 | orchestrator | 2025-03-11 01:09:38.867825 | orchestrator | TASK [Write kubeconfig file] 
*************************************************** 2025-03-11 01:09:38.867870 | orchestrator | Tuesday 11 March 2025 01:09:11 +0000 (0:00:00.904) 0:00:02.602 ********* 2025-03-11 01:09:38.867886 | orchestrator | changed: [testbed-manager] 2025-03-11 01:09:38.867902 | orchestrator | 2025-03-11 01:09:38.867917 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-03-11 01:09:38.867933 | orchestrator | Tuesday 11 March 2025 01:09:12 +0000 (0:00:01.490) 0:00:04.093 ********* 2025-03-11 01:09:38.867949 | orchestrator | changed: [testbed-manager] 2025-03-11 01:09:38.867964 | orchestrator | 2025-03-11 01:09:38.867980 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-03-11 01:09:38.867995 | orchestrator | Tuesday 11 March 2025 01:09:13 +0000 (0:00:00.806) 0:00:04.899 ********* 2025-03-11 01:09:38.868011 | orchestrator | changed: [testbed-manager -> localhost] 2025-03-11 01:09:38.868027 | orchestrator | 2025-03-11 01:09:38.868042 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-03-11 01:09:38.868058 | orchestrator | Tuesday 11 March 2025 01:09:15 +0000 (0:00:01.504) 0:00:06.403 ********* 2025-03-11 01:09:38.868074 | orchestrator | changed: [testbed-manager -> localhost] 2025-03-11 01:09:38.868089 | orchestrator | 2025-03-11 01:09:38.868103 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-03-11 01:09:38.868117 | orchestrator | Tuesday 11 March 2025 01:09:15 +0000 (0:00:00.664) 0:00:07.068 ********* 2025-03-11 01:09:38.868131 | orchestrator | ok: [testbed-manager] 2025-03-11 01:09:38.868146 | orchestrator | 2025-03-11 01:09:38.868160 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-03-11 01:09:38.868173 | orchestrator | Tuesday 11 March 2025 01:09:16 +0000 (0:00:00.515) 0:00:07.583 ********* 2025-03-11 
01:09:38.868187 | orchestrator | ok: [testbed-manager] 2025-03-11 01:09:38.868201 | orchestrator | 2025-03-11 01:09:38.868215 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:09:38.868229 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:09:38.868243 | orchestrator | 2025-03-11 01:09:38.868257 | orchestrator | 2025-03-11 01:09:38.868271 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-11 01:09:38.868285 | orchestrator | Tuesday 11 March 2025 01:09:16 +0000 (0:00:00.454) 0:00:08.037 ********* 2025-03-11 01:09:38.868298 | orchestrator | =============================================================================== 2025-03-11 01:09:38.868313 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.50s 2025-03-11 01:09:38.868336 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.49s 2025-03-11 01:09:38.868350 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.90s 2025-03-11 01:09:38.868374 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.81s 2025-03-11 01:09:38.868389 | orchestrator | Get home directory of operator user ------------------------------------- 0.79s 2025-03-11 01:09:38.868403 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.66s 2025-03-11 01:09:38.868417 | orchestrator | Create .kube directory -------------------------------------------------- 0.60s 2025-03-11 01:09:38.868431 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.52s 2025-03-11 01:09:38.868445 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.45s 2025-03-11 01:09:38.868459 | orchestrator | 2025-03-11 01:09:38.868473 | 
orchestrator | 2025-03-11 01:09:38.868487 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-03-11 01:09:38.868500 | orchestrator | 2025-03-11 01:09:38.868514 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-03-11 01:09:38.868528 | orchestrator | Tuesday 11 March 2025 01:06:40 +0000 (0:00:00.097) 0:00:00.097 ********* 2025-03-11 01:09:38.868541 | orchestrator | ok: [localhost] => { 2025-03-11 01:09:38.868556 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-03-11 01:09:38.868570 | orchestrator | } 2025-03-11 01:09:38.868585 | orchestrator | 2025-03-11 01:09:38.868599 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-03-11 01:09:38.868613 | orchestrator | Tuesday 11 March 2025 01:06:40 +0000 (0:00:00.073) 0:00:00.171 ********* 2025-03-11 01:09:38.868628 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-03-11 01:09:38.868643 | orchestrator | ...ignoring 2025-03-11 01:09:38.868657 | orchestrator | 2025-03-11 01:09:38.868671 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-03-11 01:09:38.868691 | orchestrator | Tuesday 11 March 2025 01:06:43 +0000 (0:00:02.706) 0:00:02.877 ********* 2025-03-11 01:09:38.868705 | orchestrator | skipping: [localhost] 2025-03-11 01:09:38.868719 | orchestrator | 2025-03-11 01:09:38.868733 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-03-11 01:09:38.868747 | orchestrator | Tuesday 11 March 2025 01:06:43 +0000 (0:00:00.142) 0:00:03.020 ********* 2025-03-11 01:09:38.868761 | orchestrator | ok: [localhost] 2025-03-11 01:09:38.868775 | orchestrator | 2025-03-11 01:09:38.868789 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-11 01:09:38.868803 | orchestrator | 2025-03-11 01:09:38.868817 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-11 01:09:38.868831 | orchestrator | Tuesday 11 March 2025 01:06:44 +0000 (0:00:00.353) 0:00:03.374 ********* 2025-03-11 01:09:38.868870 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:38.868884 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:09:38.868899 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:09:38.868912 | orchestrator | 2025-03-11 01:09:38.868927 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-11 01:09:38.868940 | orchestrator | Tuesday 11 March 2025 01:06:45 +0000 (0:00:00.992) 0:00:04.366 ********* 2025-03-11 01:09:38.868954 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-03-11 01:09:38.868968 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2025-03-11 01:09:38.868982 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-03-11 01:09:38.868996 | orchestrator | 2025-03-11 01:09:38.869010 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-03-11 01:09:38.869024 | orchestrator | 2025-03-11 01:09:38.869038 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-11 01:09:38.869059 | orchestrator | Tuesday 11 March 2025 01:06:46 +0000 (0:00:01.043) 0:00:05.409 ********* 2025-03-11 01:09:38.869073 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:09:38.869087 | orchestrator | 2025-03-11 01:09:38.869101 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-03-11 01:09:38.869115 | orchestrator | Tuesday 11 March 2025 01:06:47 +0000 (0:00:01.133) 0:00:06.543 ********* 2025-03-11 01:09:38.869129 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:38.869142 | orchestrator | 2025-03-11 01:09:38.869156 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-03-11 01:09:38.869170 | orchestrator | Tuesday 11 March 2025 01:06:48 +0000 (0:00:01.703) 0:00:08.249 ********* 2025-03-11 01:09:38.869184 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:38.869198 | orchestrator | 2025-03-11 01:09:38.869212 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-03-11 01:09:38.869225 | orchestrator | Tuesday 11 March 2025 01:06:50 +0000 (0:00:01.520) 0:00:09.770 ********* 2025-03-11 01:09:38.869239 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:38.869253 | orchestrator | 2025-03-11 01:09:38.869267 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-03-11 01:09:38.869281 | 
orchestrator | Tuesday 11 March 2025 01:06:52 +0000 (0:00:01.794) 0:00:11.565 ********* 2025-03-11 01:09:38.869294 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:38.869308 | orchestrator | 2025-03-11 01:09:38.869322 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-03-11 01:09:38.869336 | orchestrator | Tuesday 11 March 2025 01:06:52 +0000 (0:00:00.520) 0:00:12.085 ********* 2025-03-11 01:09:38.869350 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:38.869363 | orchestrator | 2025-03-11 01:09:38.869378 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-11 01:09:38.869391 | orchestrator | Tuesday 11 March 2025 01:06:54 +0000 (0:00:01.607) 0:00:13.692 ********* 2025-03-11 01:09:38.869405 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:09:38.869419 | orchestrator | 2025-03-11 01:09:38.869433 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-03-11 01:09:38.869453 | orchestrator | Tuesday 11 March 2025 01:06:55 +0000 (0:00:01.367) 0:00:15.060 ********* 2025-03-11 01:09:38.869467 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:09:38.869481 | orchestrator | 2025-03-11 01:09:38.869495 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-03-11 01:09:38.869509 | orchestrator | Tuesday 11 March 2025 01:06:57 +0000 (0:00:01.400) 0:00:16.460 ********* 2025-03-11 01:09:38.869523 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:09:38.869542 | orchestrator | 2025-03-11 01:09:38.869557 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-03-11 01:09:38.869570 | orchestrator | Tuesday 11 March 2025 01:06:57 +0000 (0:00:00.528) 0:00:16.989 ********* 2025-03-11 01:09:38.869584 | orchestrator | 
skipping: [testbed-node-0] 2025-03-11 01:09:38.869598 | orchestrator | 2025-03-11 01:09:38.869612 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-03-11 01:09:38.869625 | orchestrator | Tuesday 11 March 2025 01:06:58 +0000 (0:00:00.976) 0:00:17.965 ********* 2025-03-11 01:09:38.869644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:09:38.869670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:09:38.869686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:09:38.869702 | orchestrator | 2025-03-11 01:09:38.869717 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-03-11 01:09:38.869736 | orchestrator | Tuesday 11 March 2025 01:07:01 +0000 (0:00:03.133) 0:00:21.099 ********* 2025-03-11 01:09:38.869765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:09:38.869781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:09:38.869802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:09:38.869817 | orchestrator | 2025-03-11 01:09:38.869848 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-03-11 01:09:38.869864 | orchestrator | Tuesday 11 March 2025 01:07:06 +0000 (0:00:04.752) 0:00:25.851 ********* 2025-03-11 01:09:38.869878 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-11 01:09:38.869892 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-11 01:09:38.869907 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-11 01:09:38.869921 | orchestrator | 2025-03-11 01:09:38.869935 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-03-11 01:09:38.869949 | orchestrator | Tuesday 11 March 2025 01:07:10 +0000 (0:00:03.820) 0:00:29.672 *********
2025-03-11 01:09:38.869963 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-03-11 01:09:38.869977 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-03-11 01:09:38.869991 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-03-11 01:09:38.870004 | orchestrator |
2025-03-11 01:09:38.870060 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-03-11 01:09:38.870084 | orchestrator | Tuesday 11 March 2025 01:07:16 +0000 (0:00:06.107) 0:00:35.780 *********
2025-03-11 01:09:38.870099 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-03-11 01:09:38.870113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-03-11 01:09:38.870127 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-03-11 01:09:38.870141 | orchestrator |
2025-03-11 01:09:38.870156 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-03-11 01:09:38.870170 | orchestrator | Tuesday 11 March 2025 01:07:20 +0000 (0:00:04.146) 0:00:39.926 *********
2025-03-11 01:09:38.870191 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-03-11 01:09:38.870205 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-03-11 01:09:38.870219 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-03-11 01:09:38.870233 | orchestrator |
2025-03-11 01:09:38.870247 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-03-11 01:09:38.870261 | orchestrator | Tuesday 11 March 2025 01:07:26 +0000 (0:00:06.205) 0:00:46.132 *********
2025-03-11 01:09:38.870275 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-03-11 01:09:38.870289 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-03-11 01:09:38.870303 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-03-11 01:09:38.870317 | orchestrator |
2025-03-11 01:09:38.870331 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-03-11 01:09:38.870345 | orchestrator | Tuesday 11 March 2025 01:07:29 +0000 (0:00:02.206) 0:00:48.339 *********
2025-03-11 01:09:38.870359 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-03-11 01:09:38.870373 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-03-11 01:09:38.870387 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-03-11 01:09:38.870401 | orchestrator |
2025-03-11 01:09:38.870415 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-03-11 01:09:38.870428 | orchestrator | Tuesday 11 March 2025 01:07:31 +0000 (0:00:02.377) 0:00:50.716 *********
2025-03-11 01:09:38.870442 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:38.870456 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:09:38.870470 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:09:38.870484 | orchestrator |
2025-03-11 01:09:38.870498 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-03-11 01:09:38.870512 | orchestrator | Tuesday 11 March 2025 01:07:32
+0000 (0:00:01.057) 0:00:51.774 *********
2025-03-11 01:09:38.870527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-03-11 01:09:38.870554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-03-11 01:09:38.870580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-03-11 01:09:38.870595 | orchestrator |
2025-03-11 01:09:38.870609 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-03-11 01:09:38.870623 | orchestrator | Tuesday 11 March 2025 01:07:35 +0000 (0:00:02.928) 0:00:54.703 *********
2025-03-11 01:09:38.870637 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:09:38.870651 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:09:38.870665 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:09:38.870683 | orchestrator |
2025-03-11 01:09:38.870707 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-03-11 01:09:38.870730 |
orchestrator | Tuesday 11 March 2025 01:07:36 +0000 (0:00:01.497) 0:00:56.200 *********
2025-03-11 01:09:38.870754 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:09:38.870777 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:09:38.870795 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:09:38.870809 | orchestrator |
2025-03-11 01:09:38.870823 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-03-11 01:09:38.870890 | orchestrator | Tuesday 11 March 2025 01:07:46 +0000 (0:00:09.297) 0:01:05.498 *********
2025-03-11 01:09:38.870906 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:09:38.870920 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:09:38.870934 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:09:38.870948 | orchestrator |
2025-03-11 01:09:38.870962 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-03-11 01:09:38.870975 | orchestrator |
2025-03-11 01:09:38.870990 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-03-11 01:09:38.871003 | orchestrator | Tuesday 11 March 2025 01:07:46 +0000 (0:00:00.529) 0:01:06.027 *********
2025-03-11 01:09:38.871017 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:09:38.871032 | orchestrator |
2025-03-11 01:09:38.871046 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-03-11 01:09:38.871059 | orchestrator | Tuesday 11 March 2025 01:07:47 +0000 (0:00:00.806) 0:01:06.833 *********
2025-03-11 01:09:38.871073 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:09:38.871087 | orchestrator |
2025-03-11 01:09:38.871101 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-03-11 01:09:38.871123 | orchestrator | Tuesday 11 March 2025 01:07:47 +0000 (0:00:00.455) 0:01:07.288 *********
2025-03-11 01:09:38.871137 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:09:38.871151 | orchestrator |
2025-03-11 01:09:38.871165 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-03-11 01:09:38.871194 | orchestrator | Tuesday 11 March 2025 01:07:51 +0000 (0:00:03.071) 0:01:10.360 *********
2025-03-11 01:09:38.871208 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:09:38.871222 | orchestrator |
2025-03-11 01:09:38.871235 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-03-11 01:09:38.871249 | orchestrator |
2025-03-11 01:09:38.871263 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-03-11 01:09:38.871277 | orchestrator | Tuesday 11 March 2025 01:08:49 +0000 (0:00:58.165) 0:02:08.525 *********
2025-03-11 01:09:38.871291 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:09:38.871305 | orchestrator |
2025-03-11 01:09:38.871318 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-03-11 01:09:38.871332 | orchestrator | Tuesday 11 March 2025 01:08:50 +0000 (0:00:01.129) 0:02:09.654 *********
2025-03-11 01:09:38.871346 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:09:38.871360 | orchestrator |
2025-03-11 01:09:38.871374 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-03-11 01:09:38.871387 | orchestrator | Tuesday 11 March 2025 01:08:51 +0000 (0:00:00.730) 0:02:10.385 *********
2025-03-11 01:09:38.871399 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:09:38.871411 | orchestrator |
2025-03-11 01:09:38.871423 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-03-11 01:09:38.871436 | orchestrator | Tuesday 11 March 2025 01:08:54 +0000 (0:00:03.560) 0:02:13.945 *********
2025-03-11 01:09:38.871448 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:09:38.871460 | orchestrator |
2025-03-11 01:09:38.871472 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-03-11 01:09:38.871485 | orchestrator |
2025-03-11 01:09:38.871497 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-03-11 01:09:38.871509 | orchestrator | Tuesday 11 March 2025 01:09:11 +0000 (0:00:16.523) 0:02:30.469 *********
2025-03-11 01:09:38.871521 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:09:38.871534 | orchestrator |
2025-03-11 01:09:38.871552 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-03-11 01:09:38.871565 | orchestrator | Tuesday 11 March 2025 01:09:12 +0000 (0:00:01.587) 0:02:32.057 *********
2025-03-11 01:09:38.871578 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:09:38.871590 | orchestrator |
2025-03-11 01:09:38.871602 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-03-11 01:09:38.871615 | orchestrator | Tuesday 11 March 2025 01:09:13 +0000 (0:00:01.221) 0:02:33.279 *********
2025-03-11 01:09:38.871627 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:09:38.871639 | orchestrator |
2025-03-11 01:09:38.871652 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-03-11 01:09:38.871664 | orchestrator | Tuesday 11 March 2025 01:09:17 +0000 (0:00:03.155) 0:02:36.434 *********
2025-03-11 01:09:38.871676 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:09:38.871689 | orchestrator |
2025-03-11 01:09:38.871701 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-03-11 01:09:38.871713 | orchestrator |
2025-03-11 01:09:38.871725 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-03-11 01:09:38.871737 | orchestrator | Tuesday 11 March 2025 01:09:32 +0000 (0:00:15.274)
0:02:51.709 *********
2025-03-11 01:09:38.871750 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:09:38.871762 | orchestrator |
2025-03-11 01:09:38.871775 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-03-11 01:09:38.871787 | orchestrator | Tuesday 11 March 2025 01:09:33 +0000 (0:00:01.252) 0:02:52.962 *********
2025-03-11 01:09:38.871799 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-03-11 01:09:38.871811 | orchestrator | enable_outward_rabbitmq_True
2025-03-11 01:09:38.871824 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-03-11 01:09:38.871851 | orchestrator | outward_rabbitmq_restart
2025-03-11 01:09:38.871865 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:09:38.871883 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:09:38.871896 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:09:38.871908 | orchestrator |
2025-03-11 01:09:38.871921 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-03-11 01:09:38.871933 | orchestrator | skipping: no hosts matched
2025-03-11 01:09:38.871946 | orchestrator |
2025-03-11 01:09:38.871958 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-03-11 01:09:38.871971 | orchestrator | skipping: no hosts matched
2025-03-11 01:09:38.871983 | orchestrator |
2025-03-11 01:09:38.871996 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-03-11 01:09:38.872008 | orchestrator | skipping: no hosts matched
2025-03-11 01:09:38.872020 | orchestrator |
2025-03-11 01:09:38.872033 | orchestrator | PLAY RECAP *********************************************************************
2025-03-11 01:09:38.872045 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-03-11 01:09:38.872059 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-03-11 01:09:38.872072 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:09:38.872084 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-11 01:09:38.872097 | orchestrator |
2025-03-11 01:09:38.872109 | orchestrator |
2025-03-11 01:09:38.872126 | orchestrator | TASKS RECAP ********************************************************************
2025-03-11 01:09:38.872139 | orchestrator | Tuesday 11 March 2025 01:09:36 +0000 (0:00:02.787) 0:02:55.750 *********
2025-03-11 01:09:38.872151 | orchestrator | ===============================================================================
2025-03-11 01:09:38.872163 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 89.96s
2025-03-11 01:09:38.872176 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.78s
2025-03-11 01:09:38.872188 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.30s
2025-03-11 01:09:38.872201 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 6.21s
2025-03-11 01:09:38.872213 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 6.10s
2025-03-11 01:09:38.872225 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.75s
2025-03-11 01:09:38.872237 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 4.15s
2025-03-11 01:09:38.872250 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.83s
2025-03-11 01:09:38.872262 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 3.52s
2025-03-11 01:09:38.872274 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 3.13s
2025-03-11 01:09:38.872287 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.93s
2025-03-11 01:09:38.872299 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.79s
2025-03-11 01:09:38.872311 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.71s
2025-03-11 01:09:38.872323 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 2.41s
2025-03-11 01:09:38.872336 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.38s
2025-03-11 01:09:38.872348 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.21s
2025-03-11 01:09:38.872361 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.79s
2025-03-11 01:09:38.872378 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.71s
2025-03-11 01:09:41.912988 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.61s
2025-03-11 01:09:41.913139 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 1.52s
2025-03-11 01:09:41.913160 | orchestrator | 2025-03-11 01:09:38 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:41.913192 | orchestrator | 2025-03-11 01:09:41 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:09:44.950974 | orchestrator | 2025-03-11 01:09:41 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:09:44.951095 | orchestrator | 2025-03-11 01:09:41 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:09:44.951113 | orchestrator | 2025-03-11 01:09:41 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:44.951144 |
orchestrator | 2025-03-11 01:09:44 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:09:47.986153 | orchestrator | 2025-03-11 01:09:44 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:09:47.986258 | orchestrator | 2025-03-11 01:09:44 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:09:47.986270 | orchestrator | 2025-03-11 01:09:44 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:47.986292 | orchestrator | 2025-03-11 01:09:47 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:09:47.986569 | orchestrator | 2025-03-11 01:09:47 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:09:47.986586 | orchestrator | 2025-03-11 01:09:47 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:09:51.042488 | orchestrator | 2025-03-11 01:09:47 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:51.042611 | orchestrator | 2025-03-11 01:09:51 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:09:51.043617 | orchestrator | 2025-03-11 01:09:51 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:09:51.046559 | orchestrator | 2025-03-11 01:09:51 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:09:54.111958 | orchestrator | 2025-03-11 01:09:51 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:54.112085 | orchestrator | 2025-03-11 01:09:54 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:09:54.113250 | orchestrator | 2025-03-11 01:09:54 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:09:54.113302 | orchestrator | 2025-03-11 01:09:54 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:09:57.161680 | orchestrator | 2025-03-11 01:09:54 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:09:57.161868 | orchestrator | 2025-03-11 01:09:57 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:09:57.163364 | orchestrator | 2025-03-11 01:09:57 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:09:57.164997 | orchestrator | 2025-03-11 01:09:57 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:09:57.166060 | orchestrator | 2025-03-11 01:09:57 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:00.224279 | orchestrator | 2025-03-11 01:10:00 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:03.269918 | orchestrator | 2025-03-11 01:10:00 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:03.270087 | orchestrator | 2025-03-11 01:10:00 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:03.270152 | orchestrator | 2025-03-11 01:10:00 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:03.270188 | orchestrator | 2025-03-11 01:10:03 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:03.270475 | orchestrator | 2025-03-11 01:10:03 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:03.274254 | orchestrator | 2025-03-11 01:10:03 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:06.316396 | orchestrator | 2025-03-11 01:10:03 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:06.316529 | orchestrator | 2025-03-11 01:10:06 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:06.317413 | orchestrator | 2025-03-11 01:10:06 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:06.319304 | orchestrator | 2025-03-11 01:10:06 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:09.383517 | orchestrator | 2025-03-11 01:10:06 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:09.383654 | orchestrator | 2025-03-11 01:10:09 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:09.384242 | orchestrator | 2025-03-11 01:10:09 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:09.384272 | orchestrator | 2025-03-11 01:10:09 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:12.427767 | orchestrator | 2025-03-11 01:10:09 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:12.427951 | orchestrator | 2025-03-11 01:10:12 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:12.428269 | orchestrator | 2025-03-11 01:10:12 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:12.433486 | orchestrator | 2025-03-11 01:10:12 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:15.479234 | orchestrator | 2025-03-11 01:10:12 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:15.479349 | orchestrator | 2025-03-11 01:10:15 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:15.479673 | orchestrator | 2025-03-11 01:10:15 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:15.484078 | orchestrator | 2025-03-11 01:10:15 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:15.484166 | orchestrator | 2025-03-11 01:10:15 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:18.540222 | orchestrator | 2025-03-11 01:10:18 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:18.543455 | orchestrator | 2025-03-11 01:10:18 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:18.544028 | orchestrator | 2025-03-11 01:10:18 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:21.584094 | orchestrator | 2025-03-11 01:10:18 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:21.584229 | orchestrator | 2025-03-11 01:10:21 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:24.639572 | orchestrator | 2025-03-11 01:10:21 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:24.639709 | orchestrator | 2025-03-11 01:10:21 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:24.639738 | orchestrator | 2025-03-11 01:10:21 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:24.639834 | orchestrator | 2025-03-11 01:10:24 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:24.640143 | orchestrator | 2025-03-11 01:10:24 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:24.640743 | orchestrator | 2025-03-11 01:10:24 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:27.683683 | orchestrator | 2025-03-11 01:10:24 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:27.683860 | orchestrator | 2025-03-11 01:10:27 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:27.684062 | orchestrator | 2025-03-11 01:10:27 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:27.684671 | orchestrator | 2025-03-11 01:10:27 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:30.742690 | orchestrator | 2025-03-11 01:10:27 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:30.742867 | orchestrator | 2025-03-11 01:10:30 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:30.743748 | orchestrator | 2025-03-11 01:10:30 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:30.744749 | orchestrator | 2025-03-11 01:10:30 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:30.744968 | orchestrator | 2025-03-11 01:10:30 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:33.799453 | orchestrator | 2025-03-11 01:10:33 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:33.804021 | orchestrator | 2025-03-11 01:10:33 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:36.858083 | orchestrator | 2025-03-11 01:10:33 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:36.858198 | orchestrator | 2025-03-11 01:10:33 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:36.858233 | orchestrator | 2025-03-11 01:10:36 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:36.859975 | orchestrator | 2025-03-11 01:10:36 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:36.864260 | orchestrator | 2025-03-11 01:10:36 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:36.864476 | orchestrator | 2025-03-11 01:10:36 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:39.917544 | orchestrator | 2025-03-11 01:10:39 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:39.918360 | orchestrator | 2025-03-11 01:10:39 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:39.919517 | orchestrator | 2025-03-11 01:10:39 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:42.972860 | orchestrator | 2025-03-11 01:10:39 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:42.973125 | orchestrator | 2025-03-11 01:10:42 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:46.026237 | orchestrator | 2025-03-11 01:10:42 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:46.026379 | orchestrator | 2025-03-11 01:10:42 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:46.026399 | orchestrator | 2025-03-11 01:10:42 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:46.026470 | orchestrator | 2025-03-11 01:10:46 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:46.027134 | orchestrator | 2025-03-11 01:10:46 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:46.031157 | orchestrator | 2025-03-11 01:10:46 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:49.071155 | orchestrator | 2025-03-11 01:10:46 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:49.071290 | orchestrator | 2025-03-11 01:10:49 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state STARTED
2025-03-11 01:10:49.073371 | orchestrator | 2025-03-11 01:10:49 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:10:49.074153 | orchestrator | 2025-03-11 01:10:49 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED
2025-03-11 01:10:49.074841 | orchestrator | 2025-03-11 01:10:49 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:10:52.119473 | orchestrator | 2025-03-11 01:10:52 | INFO  | Task 95df0fb0-1bb8-4305-a485-6b3d10d418a8 is in state SUCCESS
2025-03-11 01:10:52.121450 | orchestrator |
2025-03-11 01:10:52.121498 | orchestrator |
2025-03-11 01:10:52.121514 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-03-11 01:10:52.121530 | orchestrator |
2025-03-11 01:10:52.121544 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
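The long run of alternating "Task … is in state STARTED" and "Wait 1 second(s) until the next check" lines above is produced by a client polling task IDs until each reaches a terminal state. A minimal sketch of such a loop (hypothetical helper names; this is not the actual osism client code):

```python
import time

# States after which a task will no longer change (assumed set)
TERMINAL = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each task until every one reaches a terminal state,
    mirroring the 'is in state STARTED ... Wait 1 second(s)' pattern."""
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # injected lookup, e.g. a REST call
            states[task_id] = state
            log(f"Task {task_id} is in state {state}")
        # Keep only tasks that have not finished yet
        pending = {t for t in pending if states[t] not in TERMINAL}
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

Injecting `get_state` and `log` keeps the loop testable without a real task queue; the production equivalent would query the task backend by ID.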
2025-03-11 01:10:52.121559 | orchestrator | Tuesday 11 March 2025 01:07:56 +0000 (0:00:00.257) 0:00:00.257 *********
2025-03-11 01:10:52.121624 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:10:52.121642 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:10:52.121656 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:10:52.121671 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:10:52.121685 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:10:52.121699 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:10:52.122121 | orchestrator |
2025-03-11 01:10:52.122153 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-11 01:10:52.122180 | orchestrator | Tuesday 11 March 2025 01:07:57 +0000 (0:00:00.983) 0:00:01.241 *********
2025-03-11 01:10:52.122203 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-03-11 01:10:52.122218 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-03-11 01:10:52.122233 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-03-11 01:10:52.122246 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-03-11 01:10:52.122260 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-03-11 01:10:52.122274 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-03-11 01:10:52.122288 | orchestrator |
2025-03-11 01:10:52.122302 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-03-11 01:10:52.122316 | orchestrator |
2025-03-11 01:10:52.122330 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-03-11 01:10:52.122345 | orchestrator | Tuesday 11 March 2025 01:07:58 +0000 (0:00:01.736) 0:00:02.978 *********
2025-03-11 01:10:52.122360 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
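The "Group hosts based on enabled services" task above adds each host to a group named after a feature flag and its value (item=enable_ovn_True), in the spirit of Ansible's group_by with a key like `enable_ovn_{{ enable_ovn }}`. A rough Python sketch of the same grouping idea (illustrative only; function and variable names are hypothetical):

```python
from collections import defaultdict

def group_hosts(host_flags: dict) -> dict:
    """Map each host into groups named '<flag>_<value>',
    e.g. 'enable_ovn_True', mimicking Ansible's group_by pattern."""
    groups = defaultdict(list)
    for host, flags in host_flags.items():
        for flag, enabled in flags.items():
            groups[f"{flag}_{enabled}"].append(host)
    return dict(groups)

# All six testbed nodes have OVN enabled, as in the task output above
hosts = {f"testbed-node-{i}": {"enable_ovn": True} for i in range(6)}
print(sorted(group_hosts(hosts)))  # ['enable_ovn_True']
```

Later plays can then target the derived group (here, anything keyed `enable_ovn_True`) instead of re-evaluating the flag on every host.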
2025-03-11 01:10:52.122375 | orchestrator |
2025-03-11 01:10:52.122389 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-03-11 01:10:52.122512 | orchestrator | Tuesday 11 March 2025 01:08:00 +0000 (0:00:01.404) 0:00:04.382 *********
2025-03-11 01:10:52.122535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122598 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122655 | orchestrator |
2025-03-11 01:10:52.122682 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-03-11 01:10:52.122697 | orchestrator | Tuesday 11 March 2025 01:08:02 +0000 (0:00:02.162) 0:00:06.545 *********
2025-03-11 01:10:52.122711 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122731 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122838 | orchestrator |
2025-03-11 01:10:52.122852 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-03-11 01:10:52.122866 | orchestrator | Tuesday 11 March 2025 01:08:06 +0000 (0:00:03.505) 0:00:10.051 *********
2025-03-11 01:10:52.122880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122894 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.122980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123001 | orchestrator |
2025-03-11 01:10:52.123015 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-03-11 01:10:52.123029 | orchestrator | Tuesday 11 March 2025 01:08:07 +0000 (0:00:01.435) 0:00:11.486 *********
2025-03-11 01:10:52.123049 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123064 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123138 | orchestrator |
2025-03-11 01:10:52.123159 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-03-11 01:10:52.123176 | orchestrator | Tuesday 11 March 2025 01:08:10 +0000 (0:00:02.746) 0:00:14.232 *********
2025-03-11 01:10:52.123197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-03-11 01:10:52.123300 | orchestrator |
2025-03-11 01:10:52.123315 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-03-11 01:10:52.123331 | orchestrator | Tuesday 11 March 2025 01:08:12 +0000 (0:00:02.323) 0:00:16.556 *********
2025-03-11 01:10:52.123347 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:10:52.123363 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:10:52.123379 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:10:52.123394 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:10:52.123410 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:10:52.123425 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:10:52.123442 | orchestrator |
2025-03-11 01:10:52.123458 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-03-11 01:10:52.123473 | orchestrator | Tuesday 11 March 2025 01:08:16 +0000 (0:00:03.961) 0:00:20.518 *********
2025-03-11 01:10:52.123488 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-03-11 01:10:52.123502 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-03-11 01:10:52.123516 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-03-11 01:10:52.123530 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-03-11 01:10:52.123544 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-03-11 01:10:52.123558 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-03-11 01:10:52.123571 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-03-11 01:10:52.123585 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-03-11 01:10:52.123604 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-03-11 01:10:52.123619 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-03-11 01:10:52.123640 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-03-11 01:10:52.123654 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-03-11 01:10:52.123668 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-03-11 01:10:52.123683 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-03-11 01:10:52.123698 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-03-11 01:10:52.123712 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-03-11 01:10:52.123727 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-03-11 01:10:52.123740 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-03-11 01:10:52.123754 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-03-11 01:10:52.123789 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-03-11 01:10:52.123804 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-03-11 01:10:52.123818 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-03-11 01:10:52.123832 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-03-11 01:10:52.123846 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-03-11 01:10:52.123860 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-03-11 01:10:52.123874 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-03-11 01:10:52.123888 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-03-11 01:10:52.123902 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-03-11 01:10:52.123916 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-03-11 01:10:52.123930 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-03-11 01:10:52.123944 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-03-11 01:10:52.124056 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-03-11 01:10:52.124073 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-03-11 01:10:52.124086 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-03-11 01:10:52.124100 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-03-11 01:10:52.124115 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-03-11 01:10:52.124129 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-03-11 01:10:52.124143 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-03-11 01:10:52.124157 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-03-11 01:10:52.124180 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-03-11 01:10:52.124195 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-03-11 01:10:52.124209 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-03-11 01:10:52.124223 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-03-11 01:10:52.124238 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-03-11 01:10:52.124259 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-03-11 01:10:52.124274 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-03-11 01:10:52.124288 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-03-11 01:10:52.124303 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-03-11 01:10:52.124317 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-03-11 01:10:52.124332 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-03-11 01:10:52.124355 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-03-11 01:10:52.124380 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-03-11 01:10:52.124402 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-03-11 01:10:52.124425 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-03-11 01:10:52.124447 | orchestrator |
2025-03-11 01:10:52.124469 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-11 01:10:52.124493 | orchestrator | Tuesday 11 March 2025 01:08:38 +0000 (0:00:22.350) 0:00:42.868 *********
2025-03-11 01:10:52.124617 | orchestrator |
2025-03-11 01:10:52.124643 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-11 01:10:52.124664 | orchestrator | Tuesday 11 March 2025 01:08:39 +0000 (0:00:00.166) 0:00:43.035 *********
2025-03-11 01:10:52.124678 | orchestrator |
2025-03-11 01:10:52.124692 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-11 01:10:52.124706 | orchestrator | Tuesday 11 March 2025 01:08:39 +0000 (0:00:00.126) 0:00:43.161 *********
2025-03-11 01:10:52.124720 | orchestrator |
2025-03-11 01:10:52.124734 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-11 01:10:52.124811 | orchestrator | Tuesday 11 March 2025 01:08:39 +0000 (0:00:00.289) 0:00:43.451 *********
2025-03-11 01:10:52.124830 | orchestrator |
2025-03-11 01:10:52.124844 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-11 01:10:52.124858 | orchestrator | Tuesday 11 March 2025 01:08:39 +0000 (0:00:00.067) 0:00:43.518 *********
2025-03-11 01:10:52.124873 | orchestrator |
2025-03-11 01:10:52.124887 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-03-11 01:10:52.124901 | orchestrator | Tuesday 11 March 2025 01:08:39 +0000 (0:00:00.081) 0:00:43.600 *********
2025-03-11 01:10:52.124915 | orchestrator |
2025-03-11 01:10:52.124929 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-03-11 01:10:52.124943 | orchestrator | Tuesday 11 March 2025 01:08:39 +0000 (0:00:00.064) 0:00:43.665 *********
2025-03-11 01:10:52.124976 | orchestrator | ok: [testbed-node-5]
2025-03-11 01:10:52.124991 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:10:52.125005 | orchestrator | ok: [testbed-node-4]
2025-03-11 01:10:52.125020 | orchestrator | ok: [testbed-node-3]
2025-03-11 01:10:52.125034 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:10:52.125047 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:10:52.125067 | orchestrator |
2025-03-11 01:10:52.125081 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-03-11 01:10:52.125095 | orchestrator | Tuesday 11 March 2025 01:08:42 +0000 (0:00:02.851) 0:00:46.516 *********
2025-03-11 01:10:52.125110 | orchestrator | changed: [testbed-node-0]
2025-03-11 01:10:52.125124 | orchestrator | changed: [testbed-node-5]
2025-03-11 01:10:52.125138 | orchestrator | changed: [testbed-node-3]
2025-03-11 01:10:52.125152 | orchestrator | changed: [testbed-node-4]
2025-03-11 01:10:52.125166 | orchestrator | changed: [testbed-node-1]
2025-03-11 01:10:52.125180 | orchestrator | changed: [testbed-node-2]
2025-03-11 01:10:52.125194 | orchestrator |
2025-03-11 01:10:52.125208 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-03-11 01:10:52.125225 | orchestrator |
2025-03-11 01:10:52.125241 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-03-11 01:10:52.125256 | orchestrator | Tuesday 11 March 2025 01:09:07 +0000 (0:00:25.091) 0:01:11.608 *********
2025-03-11 01:10:52.125273 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:10:52.125288 | orchestrator |
2025-03-11 01:10:52.125302 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-03-11 01:10:52.125316 | orchestrator | Tuesday 11 March 2025 01:09:08 +0000 (0:00:01.030) 0:01:12.638 *********
2025-03-11 01:10:52.125330 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:10:52.125344 | orchestrator |
2025-03-11 01:10:52.125358 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-03-11 01:10:52.125372 | orchestrator | Tuesday 11 March 2025 01:09:09 +0000 (0:00:00.943) 0:01:13.582 *********
2025-03-11 01:10:52.125385 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:10:52.125399 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:10:52.125413 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:10:52.125427 | orchestrator |
2025-03-11 01:10:52.125441 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-03-11 01:10:52.125455 | orchestrator | Tuesday 11 March 2025 01:09:10 +0000 (0:00:01.273) 0:01:14.856 *********
2025-03-11 01:10:52.125469 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:10:52.125483 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:10:52.125497 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:10:52.125519 | orchestrator |
2025-03-11 01:10:52.125533 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-03-11 01:10:52.125548 | orchestrator | Tuesday 11 March 2025 01:09:12 +0000 (0:00:01.735) 0:01:16.592 *********
2025-03-11 01:10:52.125561 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:10:52.125575 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:10:52.125587 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:10:52.125600 | orchestrator |
2025-03-11 01:10:52.125612 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-03-11 01:10:52.125625 | orchestrator | Tuesday 11 March 2025 01:09:15 +0000 (0:00:02.535) 0:01:19.127 *********
2025-03-11 01:10:52.125637 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:10:52.125650 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:10:52.125662 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:10:52.125675 | orchestrator |
2025-03-11 01:10:52.125687 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-03-11 01:10:52.125700 | orchestrator | Tuesday 11 March 2025 01:09:16 +0000 (0:00:01.401) 0:01:20.528 *********
2025-03-11 01:10:52.125712 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:10:52.125725 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:10:52.125743 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:10:52.125796 | orchestrator |
2025-03-11 01:10:52.125811 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-03-11 01:10:52.125824 | orchestrator | Tuesday 11 March 2025 01:09:17 +0000 (0:00:00.670) 0:01:21.199 *********
2025-03-11 01:10:52.125837 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.125849 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.125862 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.125874 | orchestrator |
2025-03-11 01:10:52.125887 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-03-11 01:10:52.125899 | orchestrator | Tuesday 11 March 2025 01:09:17 +0000 (0:00:00.533) 0:01:21.733 *********
2025-03-11 01:10:52.125912 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.125924 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.125937 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.125950 | orchestrator |
2025-03-11 01:10:52.125962 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-03-11 01:10:52.125975 | orchestrator | Tuesday 11 March 2025 01:09:18 +0000 (0:00:00.508) 0:01:22.242 *********
2025-03-11 01:10:52.125987 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.125999 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.126012 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.126055 | orchestrator |
2025-03-11 01:10:52.126068 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-03-11 01:10:52.126081 | orchestrator | Tuesday 11 March 2025 01:09:18 +0000 (0:00:00.434) 0:01:22.676 *********
2025-03-11 01:10:52.126093 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.126106 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.126118 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.126131 | orchestrator |
2025-03-11 01:10:52.126149 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-03-11 01:10:52.126162 | orchestrator | Tuesday 11 March 2025 01:09:19 +0000 (0:00:00.679) 0:01:23.356 *********
2025-03-11 01:10:52.126175 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.126188 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.126200 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.126213 | orchestrator |
2025-03-11 01:10:52.126225 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-03-11 01:10:52.126238 | orchestrator | Tuesday 11 March 2025 01:09:20 +0000 (0:00:00.700) 0:01:24.056 *********
2025-03-11 01:10:52.126250 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.126262 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.126275 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.126287 | orchestrator |
2025-03-11 01:10:52.126300 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-03-11 01:10:52.126312 | orchestrator | Tuesday 11 March 2025 01:09:20 +0000 (0:00:00.548) 0:01:24.605 *********
2025-03-11 01:10:52.126323 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.126333 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.126343 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.126353 | orchestrator |
2025-03-11 01:10:52.126363 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-03-11 01:10:52.126373 | orchestrator | Tuesday 11 March 2025 01:09:20 +0000 (0:00:00.380) 0:01:24.985 *********
2025-03-11 01:10:52.126383 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.126394 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.126404 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.126414 | orchestrator |
2025-03-11 01:10:52.126424 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-03-11 01:10:52.126434 | orchestrator | Tuesday 11 March 2025 01:09:21 +0000 (0:00:00.574) 0:01:25.560 *********
2025-03-11 01:10:52.126444 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.126454 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.126471 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.126481 | orchestrator |
2025-03-11 01:10:52.126491 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-03-11 01:10:52.126502 | orchestrator | Tuesday 11 March 2025 01:09:22 +0000 (0:00:00.474) 0:01:26.034 *********
2025-03-11 01:10:52.126512 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.126522 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.126532 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.126543 | orchestrator |
2025-03-11 01:10:52.126553 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-03-11 01:10:52.126564 | orchestrator | Tuesday 11 March 2025 01:09:22 +0000 (0:00:00.323) 0:01:26.358 *********
2025-03-11 01:10:52.126574 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.126590 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.126602 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.126613 | orchestrator |
2025-03-11 01:10:52.126623 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-03-11 01:10:52.126633 | orchestrator | Tuesday 11 March 2025 01:09:22 +0000 (0:00:00.579) 0:01:26.837 *********
2025-03-11 01:10:52.126643 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:10:52.126653 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:10:52.126670 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:10:52.126680 | orchestrator |
2025-03-11 01:10:52.126691 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-03-11 01:10:52.126701 | orchestrator | Tuesday 11 March 2025 01:09:23 +0000 (0:00:00.945) 0:01:27.417 *********
2025-03-11 01:10:52.126711 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:10:52.126722 | orchestrator |
2025-03-11 01:10:52.126732 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-03-11 01:10:52.126742 | orchestrator | Tuesday 11 March 2025 01:09:24 +0000 (0:00:00.945) 0:01:28.362 *********
2025-03-11 01:10:52.126752 | orchestrator | ok: [testbed-node-0]
2025-03-11 01:10:52.126777 | orchestrator | ok: [testbed-node-1]
2025-03-11 01:10:52.126788 | orchestrator | ok: [testbed-node-2]
2025-03-11 01:10:52.126798 | orchestrator |
2025-03-11 01:10:52.126809 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-03-11 01:10:52.126819 |
orchestrator | Tuesday 11 March 2025 01:09:25 +0000 (0:00:00.841) 0:01:29.204 ********* 2025-03-11 01:10:52.126829 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:10:52.126840 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:10:52.126850 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:10:52.126860 | orchestrator | 2025-03-11 01:10:52.126870 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-03-11 01:10:52.126880 | orchestrator | Tuesday 11 March 2025 01:09:26 +0000 (0:00:01.083) 0:01:30.287 ********* 2025-03-11 01:10:52.126891 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:10:52.126901 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:10:52.126911 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:10:52.126921 | orchestrator | 2025-03-11 01:10:52.126932 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-03-11 01:10:52.126942 | orchestrator | Tuesday 11 March 2025 01:09:26 +0000 (0:00:00.514) 0:01:30.801 ********* 2025-03-11 01:10:52.126952 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:10:52.126963 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:10:52.126973 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:10:52.126983 | orchestrator | 2025-03-11 01:10:52.126993 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-03-11 01:10:52.127003 | orchestrator | Tuesday 11 March 2025 01:09:27 +0000 (0:00:01.091) 0:01:31.893 ********* 2025-03-11 01:10:52.127014 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:10:52.127024 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:10:52.127034 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:10:52.127044 | orchestrator | 2025-03-11 01:10:52.127055 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-03-11 01:10:52.127070 | 
orchestrator | Tuesday 11 March 2025 01:09:29 +0000 (0:00:01.339) 0:01:33.233 ********* 2025-03-11 01:10:52.127081 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:10:52.127091 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:10:52.127101 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:10:52.127111 | orchestrator | 2025-03-11 01:10:52.127122 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-03-11 01:10:52.127136 | orchestrator | Tuesday 11 March 2025 01:09:30 +0000 (0:00:00.882) 0:01:34.115 ********* 2025-03-11 01:10:52.127147 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:10:52.127157 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:10:52.127167 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:10:52.127177 | orchestrator | 2025-03-11 01:10:52.127188 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-03-11 01:10:52.127198 | orchestrator | Tuesday 11 March 2025 01:09:30 +0000 (0:00:00.439) 0:01:34.555 ********* 2025-03-11 01:10:52.127208 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:10:52.127218 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:10:52.127228 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:10:52.127239 | orchestrator | 2025-03-11 01:10:52.127249 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-03-11 01:10:52.127259 | orchestrator | Tuesday 11 March 2025 01:09:31 +0000 (0:00:00.696) 0:01:35.251 ********* 2025-03-11 01:10:52.127270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 
01:10:52.127288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127391 | orchestrator | 2025-03-11 01:10:52.127401 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-03-11 01:10:52.127411 | orchestrator | Tuesday 11 March 2025 01:09:33 +0000 (0:00:02.679) 0:01:37.931 ********* 2025-03-11 01:10:52.127422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127528 | orchestrator | 2025-03-11 01:10:52.127539 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-03-11 01:10:52.127549 | orchestrator | Tuesday 11 March 2025 01:09:39 +0000 (0:00:05.536) 0:01:43.467 ********* 2025-03-11 01:10:52.127559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127617 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.127663 | orchestrator | 2025-03-11 01:10:52.127674 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-11 01:10:52.127684 | orchestrator | Tuesday 11 March 2025 01:09:42 +0000 (0:00:03.341) 0:01:46.809 ********* 
2025-03-11 01:10:52.127694 | orchestrator | 2025-03-11 01:10:52.127705 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-11 01:10:52.127715 | orchestrator | Tuesday 11 March 2025 01:09:42 +0000 (0:00:00.204) 0:01:47.013 ********* 2025-03-11 01:10:52.127725 | orchestrator | 2025-03-11 01:10:52.127735 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-11 01:10:52.127746 | orchestrator | Tuesday 11 March 2025 01:09:43 +0000 (0:00:00.588) 0:01:47.601 ********* 2025-03-11 01:10:52.127771 | orchestrator | 2025-03-11 01:10:52.127782 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-03-11 01:10:52.127792 | orchestrator | Tuesday 11 March 2025 01:09:43 +0000 (0:00:00.158) 0:01:47.760 ********* 2025-03-11 01:10:52.127802 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:10:52.127813 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:10:52.127823 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:10:52.127833 | orchestrator | 2025-03-11 01:10:52.127843 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-03-11 01:10:52.127854 | orchestrator | Tuesday 11 March 2025 01:09:47 +0000 (0:00:03.965) 0:01:51.725 ********* 2025-03-11 01:10:52.127864 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:10:52.127874 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:10:52.127884 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:10:52.127894 | orchestrator | 2025-03-11 01:10:52.127904 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-03-11 01:10:52.127915 | orchestrator | Tuesday 11 March 2025 01:09:55 +0000 (0:00:07.903) 0:01:59.628 ********* 2025-03-11 01:10:52.127925 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:10:52.127935 | orchestrator | changed: [testbed-node-2] 
2025-03-11 01:10:52.127945 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:10:52.127955 | orchestrator | 2025-03-11 01:10:52.127966 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-03-11 01:10:52.127976 | orchestrator | Tuesday 11 March 2025 01:09:58 +0000 (0:00:03.302) 0:02:02.930 ********* 2025-03-11 01:10:52.127986 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:10:52.127996 | orchestrator | 2025-03-11 01:10:52.128007 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-03-11 01:10:52.128017 | orchestrator | Tuesday 11 March 2025 01:09:59 +0000 (0:00:00.136) 0:02:03.067 ********* 2025-03-11 01:10:52.128027 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:10:52.128037 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:10:52.128048 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:10:52.128058 | orchestrator | 2025-03-11 01:10:52.128068 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-03-11 01:10:52.128083 | orchestrator | Tuesday 11 March 2025 01:10:00 +0000 (0:00:01.007) 0:02:04.074 ********* 2025-03-11 01:10:52.128093 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:10:52.128104 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:10:52.128114 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:10:52.128124 | orchestrator | 2025-03-11 01:10:52.128134 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-03-11 01:10:52.128144 | orchestrator | Tuesday 11 March 2025 01:10:00 +0000 (0:00:00.854) 0:02:04.928 ********* 2025-03-11 01:10:52.128155 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:10:52.128165 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:10:52.128175 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:10:52.128185 | orchestrator | 2025-03-11 01:10:52.128196 | orchestrator | TASK [ovn-db : Configure 
OVN SB connection settings] *************************** 2025-03-11 01:10:52.128206 | orchestrator | Tuesday 11 March 2025 01:10:01 +0000 (0:00:00.799) 0:02:05.728 ********* 2025-03-11 01:10:52.128216 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:10:52.128227 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:10:52.128241 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:10:52.128251 | orchestrator | 2025-03-11 01:10:52.128261 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-03-11 01:10:52.128279 | orchestrator | Tuesday 11 March 2025 01:10:02 +0000 (0:00:00.646) 0:02:06.375 ********* 2025-03-11 01:10:52.128289 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:10:52.128300 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:10:52.128314 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:10:52.128325 | orchestrator | 2025-03-11 01:10:52.128335 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-03-11 01:10:52.128345 | orchestrator | Tuesday 11 March 2025 01:10:03 +0000 (0:00:01.434) 0:02:07.809 ********* 2025-03-11 01:10:52.128356 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:10:52.128366 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:10:52.128376 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:10:52.128386 | orchestrator | 2025-03-11 01:10:52.128397 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-03-11 01:10:52.128407 | orchestrator | Tuesday 11 March 2025 01:10:05 +0000 (0:00:01.256) 0:02:09.065 ********* 2025-03-11 01:10:52.128417 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:10:52.128427 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:10:52.128437 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:10:52.128448 | orchestrator | 2025-03-11 01:10:52.128458 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 
2025-03-11 01:10:52.128468 | orchestrator | Tuesday 11 March 2025 01:10:05 +0000 (0:00:00.492) 0:02:09.557 ********* 2025-03-11 01:10:52.128478 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128493 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128503 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128514 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128533 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128544 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128554 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128565 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128580 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128590 | orchestrator | 2025-03-11 01:10:52.128601 | 
orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-03-11 01:10:52.128611 | orchestrator | Tuesday 11 March 2025 01:10:07 +0000 (0:00:01.896) 0:02:11.454 ********* 2025-03-11 01:10:52.128621 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128632 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128642 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128652 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128688 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128723 | orchestrator | 2025-03-11 01:10:52.128734 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-03-11 01:10:52.128744 | orchestrator | Tuesday 11 March 2025 01:10:12 +0000 (0:00:05.481) 0:02:16.936 ********* 2025-03-11 01:10:52.128793 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128806 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128816 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128827 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128843 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128853 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128863 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128874 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128884 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-11 01:10:52.128895 | orchestrator | 2025-03-11 01:10:52.128905 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-11 01:10:52.128915 | orchestrator | Tuesday 11 March 2025 01:10:19 +0000 (0:00:06.569) 0:02:23.505 ********* 2025-03-11 01:10:52.128926 | orchestrator | 2025-03-11 01:10:52.128936 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-11 01:10:52.128946 | orchestrator | Tuesday 11 March 2025 01:10:19 +0000 (0:00:00.264) 0:02:23.770 ********* 2025-03-11 01:10:52.128956 | orchestrator | 2025-03-11 01:10:52.128966 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-11 01:10:52.128976 | orchestrator | Tuesday 11 March 2025 01:10:19 +0000 (0:00:00.064) 0:02:23.834 ********* 2025-03-11 01:10:52.128986 | orchestrator | 2025-03-11 01:10:52.128996 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-03-11 01:10:52.129007 | orchestrator | Tuesday 11 March 2025 01:10:19 +0000 (0:00:00.074) 0:02:23.908 ********* 2025-03-11 01:10:52.129017 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:10:52.129027 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:10:52.129037 | orchestrator | 2025-03-11 01:10:52.129051 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-03-11 01:10:52.129062 | orchestrator | Tuesday 11 March 2025 01:10:27 +0000 (0:00:07.500) 0:02:31.409 ********* 2025-03-11 01:10:52.129072 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:10:52.129083 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:10:52.129093 | orchestrator | 2025-03-11 01:10:52.129103 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] 
************************ 2025-03-11 01:10:52.129113 | orchestrator | Tuesday 11 March 2025 01:10:34 +0000 (0:00:07.023) 0:02:38.432 ********* 2025-03-11 01:10:52.129124 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:10:52.129134 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:10:52.129149 | orchestrator | 2025-03-11 01:10:52.129159 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-03-11 01:10:52.129169 | orchestrator | Tuesday 11 March 2025 01:10:41 +0000 (0:00:07.199) 0:02:45.632 ********* 2025-03-11 01:10:52.129180 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:10:52.129190 | orchestrator | 2025-03-11 01:10:52.129200 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-03-11 01:10:52.129210 | orchestrator | Tuesday 11 March 2025 01:10:41 +0000 (0:00:00.165) 0:02:45.797 ********* 2025-03-11 01:10:52.129221 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:10:52.129231 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:10:52.129241 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:10:52.129251 | orchestrator | 2025-03-11 01:10:52.129262 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-03-11 01:10:52.129272 | orchestrator | Tuesday 11 March 2025 01:10:43 +0000 (0:00:01.647) 0:02:47.444 ********* 2025-03-11 01:10:52.129282 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:10:52.129292 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:10:52.129302 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:10:52.129313 | orchestrator | 2025-03-11 01:10:52.129321 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-03-11 01:10:52.129330 | orchestrator | Tuesday 11 March 2025 01:10:44 +0000 (0:00:00.964) 0:02:48.409 ********* 2025-03-11 01:10:52.129339 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:10:52.129347 | 
orchestrator | ok: [testbed-node-0] 2025-03-11 01:10:52.129356 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:10:52.129365 | orchestrator | 2025-03-11 01:10:52.129373 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-03-11 01:10:52.129385 | orchestrator | Tuesday 11 March 2025 01:10:45 +0000 (0:00:01.374) 0:02:49.783 ********* 2025-03-11 01:10:52.129394 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:10:52.129403 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:10:52.129411 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:10:52.129420 | orchestrator | 2025-03-11 01:10:52.129429 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-03-11 01:10:52.129437 | orchestrator | Tuesday 11 March 2025 01:10:46 +0000 (0:00:01.226) 0:02:51.010 ********* 2025-03-11 01:10:52.129446 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:10:52.129454 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:10:52.129463 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:10:52.129472 | orchestrator | 2025-03-11 01:10:52.129480 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-03-11 01:10:52.129489 | orchestrator | Tuesday 11 March 2025 01:10:48 +0000 (0:00:01.184) 0:02:52.194 ********* 2025-03-11 01:10:52.129497 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:10:52.129506 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:10:52.129515 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:10:52.129523 | orchestrator | 2025-03-11 01:10:52.129532 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:10:52.129540 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-03-11 01:10:52.129550 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 
2025-03-11 01:10:52.129558 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-03-11 01:10:52.129567 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:10:52.129576 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:10:52.129585 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-11 01:10:52.129598 | orchestrator | 2025-03-11 01:10:52.129607 | orchestrator | 2025-03-11 01:10:52.129615 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-11 01:10:52.129624 | orchestrator | Tuesday 11 March 2025 01:10:49 +0000 (0:00:01.727) 0:02:53.921 ********* 2025-03-11 01:10:52.129633 | orchestrator | =============================================================================== 2025-03-11 01:10:52.129641 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 25.09s 2025-03-11 01:10:52.129650 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.35s 2025-03-11 01:10:52.129658 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.93s 2025-03-11 01:10:52.129667 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 11.47s 2025-03-11 01:10:52.129675 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 10.50s 2025-03-11 01:10:52.129684 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 6.57s 2025-03-11 01:10:52.129693 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.54s 2025-03-11 01:10:52.129705 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.48s 2025-03-11 
01:10:55.193313 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.96s 2025-03-11 01:10:55.193427 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.51s 2025-03-11 01:10:55.193445 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.34s 2025-03-11 01:10:55.193460 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.85s 2025-03-11 01:10:55.193474 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.75s 2025-03-11 01:10:55.193488 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.68s 2025-03-11 01:10:55.193503 | orchestrator | ovn-db : Divide hosts by their OVN SB volume availability --------------- 2.54s 2025-03-11 01:10:55.193518 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.32s 2025-03-11 01:10:55.193532 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.16s 2025-03-11 01:10:55.193546 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.90s 2025-03-11 01:10:55.193559 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.74s 2025-03-11 01:10:55.193573 | orchestrator | ovn-db : Divide hosts by their OVN NB volume availability --------------- 1.74s 2025-03-11 01:10:55.193587 | orchestrator | 2025-03-11 01:10:52 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:10:55.193602 | orchestrator | 2025-03-11 01:10:52 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:10:55.193616 | orchestrator | 2025-03-11 01:10:52 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:10:55.193647 | orchestrator | 2025-03-11 01:10:55 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 
2025-03-11 01:10:58.241023 | orchestrator | 2025-03-11 01:10:55 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:10:58.241142 | orchestrator | 2025-03-11 01:10:55 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:10:58.241178 | orchestrator | 2025-03-11 01:10:58 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:11:01.287219 | orchestrator | 2025-03-11 01:10:58 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:11:01.287335 | orchestrator | 2025-03-11 01:10:58 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:11:01.287369 | orchestrator | 2025-03-11 01:11:01 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:11:01.287828 | orchestrator | 2025-03-11 01:11:01 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:11:01.288002 | orchestrator | 2025-03-11 01:11:01 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:11:04.340463 | orchestrator | 2025-03-11 01:11:04 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:11:04.342374 | orchestrator | 2025-03-11 01:11:04 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:11:04.342419 | orchestrator | 2025-03-11 01:11:04 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:11:07.395556 | orchestrator | 2025-03-11 01:11:07 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:11:07.401092 | orchestrator | 2025-03-11 01:11:07 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:11:10.452659 | orchestrator | 2025-03-11 01:11:07 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:11:10.452869 | orchestrator | 2025-03-11 01:11:10 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:11:13.499706 | orchestrator | 2025-03-11 01:11:10 | INFO  | Task 
| Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:13:52.233138 | orchestrator | 2025-03-11 01:13:49 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:13:52.233275 | orchestrator | 2025-03-11 01:13:52 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:13:52.234441 | orchestrator | 2025-03-11 01:13:52 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:13:55.283979 | orchestrator | 2025-03-11 01:13:52 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:13:55.284108 | orchestrator | 2025-03-11 01:13:55 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:13:55.284692 | orchestrator | 2025-03-11 01:13:55 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:13:58.335788 | orchestrator | 2025-03-11 01:13:55 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:13:58.335930 | orchestrator | 2025-03-11 01:13:58 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:13:58.336130 | orchestrator | 2025-03-11 01:13:58 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:01.388315 | orchestrator | 2025-03-11 01:13:58 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:01.388455 | orchestrator | 2025-03-11 01:14:01 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:01.390213 | orchestrator | 2025-03-11 01:14:01 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:04.441674 | orchestrator | 2025-03-11 01:14:01 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:04.441908 | orchestrator | 2025-03-11 01:14:04 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:07.492298 | orchestrator | 2025-03-11 01:14:04 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 
01:14:07.492411 | orchestrator | 2025-03-11 01:14:04 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:07.492445 | orchestrator | 2025-03-11 01:14:07 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:07.494428 | orchestrator | 2025-03-11 01:14:07 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:10.542770 | orchestrator | 2025-03-11 01:14:07 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:10.542998 | orchestrator | 2025-03-11 01:14:10 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:13.585251 | orchestrator | 2025-03-11 01:14:10 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:13.585367 | orchestrator | 2025-03-11 01:14:10 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:13.585403 | orchestrator | 2025-03-11 01:14:13 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:13.586934 | orchestrator | 2025-03-11 01:14:13 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:16.645522 | orchestrator | 2025-03-11 01:14:13 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:16.645699 | orchestrator | 2025-03-11 01:14:16 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:19.689049 | orchestrator | 2025-03-11 01:14:16 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:19.689173 | orchestrator | 2025-03-11 01:14:16 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:19.689212 | orchestrator | 2025-03-11 01:14:19 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:19.690070 | orchestrator | 2025-03-11 01:14:19 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:19.690878 | orchestrator | 2025-03-11 01:14:19 | INFO  | Wait 1 second(s) 
until the next check 2025-03-11 01:14:22.727131 | orchestrator | 2025-03-11 01:14:22 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:22.727780 | orchestrator | 2025-03-11 01:14:22 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:25.765290 | orchestrator | 2025-03-11 01:14:22 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:25.765449 | orchestrator | 2025-03-11 01:14:25 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:25.767316 | orchestrator | 2025-03-11 01:14:25 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:25.767548 | orchestrator | 2025-03-11 01:14:25 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:28.821161 | orchestrator | 2025-03-11 01:14:28 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:31.877967 | orchestrator | 2025-03-11 01:14:28 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:31.878148 | orchestrator | 2025-03-11 01:14:28 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:31.878187 | orchestrator | 2025-03-11 01:14:31 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:31.878690 | orchestrator | 2025-03-11 01:14:31 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state STARTED 2025-03-11 01:14:31.878890 | orchestrator | 2025-03-11 01:14:31 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:34.926279 | orchestrator | 2025-03-11 01:14:34 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:34.932170 | orchestrator | 2025-03-11 01:14:34.932221 | orchestrator | 2025-03-11 01:14:34.932238 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-11 01:14:34.932253 | orchestrator | 2025-03-11 01:14:34.932269 | orchestrator | TASK 
[Group hosts based on Kolla action] *************************************** 2025-03-11 01:14:34.932285 | orchestrator | Tuesday 11 March 2025 01:06:02 +0000 (0:00:00.280) 0:00:00.280 ********* 2025-03-11 01:14:34.932299 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:14:34.932384 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:14:34.932404 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:14:34.932503 | orchestrator | 2025-03-11 01:14:34.933231 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-11 01:14:34.933576 | orchestrator | Tuesday 11 March 2025 01:06:04 +0000 (0:00:01.077) 0:00:01.358 ********* 2025-03-11 01:14:34.933597 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-03-11 01:14:34.933693 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-03-11 01:14:34.933774 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-03-11 01:14:34.933793 | orchestrator | 2025-03-11 01:14:34.933808 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-03-11 01:14:34.933823 | orchestrator | 2025-03-11 01:14:34.933838 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-03-11 01:14:34.933853 | orchestrator | Tuesday 11 March 2025 01:06:04 +0000 (0:00:00.823) 0:00:02.181 ********* 2025-03-11 01:14:34.933868 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.933883 | orchestrator | 2025-03-11 01:14:34.933898 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-03-11 01:14:34.933913 | orchestrator | Tuesday 11 March 2025 01:06:06 +0000 (0:00:01.363) 0:00:03.544 ********* 2025-03-11 01:14:34.933927 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:14:34.933942 | orchestrator | ok: [testbed-node-1] 
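The long runs of "is in state STARTED" / "Wait 1 second(s) until the next check" lines earlier in this console come from a client-side polling loop over Celery-style task IDs. A minimal sketch of that pattern in Python, assuming a hypothetical caller-supplied `get_task_state()` helper (the terminal state names are also assumptions, not taken from the actual OSISM client):

```python
import time
import logging

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                    level=logging.INFO)
log = logging.getLogger(__name__)

# Assumed terminal states; the real tooling may use a different set.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll each task until every one reaches a terminal state.

    get_task_state is a caller-supplied callable (hypothetical here)
    that returns a state string such as "STARTED" or "SUCCESS".
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            log.info("Task %s is in state %s", task_id, state)
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            log.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)
```

With a one-second interval and two long-running tasks, this loop reproduces the alternating pair of STARTED lines followed by a single "Wait 1 second(s)" line seen above.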
2025-03-11 01:14:34.933958 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:14:34.933973 | orchestrator | 2025-03-11 01:14:34.933988 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-03-11 01:14:34.934003 | orchestrator | Tuesday 11 March 2025 01:06:07 +0000 (0:00:01.809) 0:00:05.354 ********* 2025-03-11 01:14:34.934060 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.934078 | orchestrator | 2025-03-11 01:14:34.934093 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-03-11 01:14:34.934108 | orchestrator | Tuesday 11 March 2025 01:06:11 +0000 (0:00:03.154) 0:00:08.508 ********* 2025-03-11 01:14:34.934122 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:14:34.934137 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:14:34.934151 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:14:34.934165 | orchestrator | 2025-03-11 01:14:34.934180 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-03-11 01:14:34.934195 | orchestrator | Tuesday 11 March 2025 01:06:12 +0000 (0:00:01.711) 0:00:10.220 ********* 2025-03-11 01:14:34.934236 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:14:34.934261 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:14:34.934279 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:14:34.934295 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:14:34.934311 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-03-11 01:14:34.934328 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 
'value': 1}) 2025-03-11 01:14:34.934344 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-03-11 01:14:34.934362 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-03-11 01:14:34.934378 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-03-11 01:14:34.934394 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-03-11 01:14:34.934411 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-03-11 01:14:34.934427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-03-11 01:14:34.934443 | orchestrator | 2025-03-11 01:14:34.934460 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-03-11 01:14:34.934476 | orchestrator | Tuesday 11 March 2025 01:06:20 +0000 (0:00:07.696) 0:00:17.916 ********* 2025-03-11 01:14:34.934491 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-03-11 01:14:34.934507 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-03-11 01:14:34.934523 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-03-11 01:14:34.934540 | orchestrator | 2025-03-11 01:14:34.934556 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-03-11 01:14:34.934570 | orchestrator | Tuesday 11 March 2025 01:06:23 +0000 (0:00:03.380) 0:00:21.297 ********* 2025-03-11 01:14:34.934585 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-03-11 01:14:34.934599 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-03-11 01:14:34.934635 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-03-11 01:14:34.934651 | orchestrator | 2025-03-11 01:14:34.934665 | orchestrator | TASK [module-load : 
Drop module persistence] *********************************** 2025-03-11 01:14:34.934680 | orchestrator | Tuesday 11 March 2025 01:06:25 +0000 (0:00:01.852) 0:00:23.150 ********* 2025-03-11 01:14:34.934695 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-03-11 01:14:34.934709 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.935410 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-03-11 01:14:34.935440 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.935456 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-03-11 01:14:34.935470 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.935484 | orchestrator | 2025-03-11 01:14:34.935498 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-03-11 01:14:34.935512 | orchestrator | Tuesday 11 March 2025 01:06:26 +0000 (0:00:00.940) 0:00:24.091 ********* 2025-03-11 01:14:34.935528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.935609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.935692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.935707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.935723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.935771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.935789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.935804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', 
'__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.935830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.935846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.935860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.937526 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.937543 | orchestrator | 2025-03-11 01:14:34.937556 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-03-11 01:14:34.937569 | orchestrator | Tuesday 11 March 2025 01:06:29 +0000 (0:00:03.050) 0:00:27.141 ********* 2025-03-11 01:14:34.937581 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:14:34.937595 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:14:34.937607 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:14:34.937756 | orchestrator | 2025-03-11 01:14:34.937771 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-03-11 01:14:34.937782 | orchestrator | Tuesday 11 March 2025 01:06:36 +0000 (0:00:06.392) 0:00:33.533 ********* 2025-03-11 01:14:34.937793 | orchestrator | skipping: [testbed-node-1] => (item=users)  2025-03-11 01:14:34.937829 | orchestrator | skipping: [testbed-node-0] => (item=users)  2025-03-11 01:14:34.937937 | orchestrator | skipping: [testbed-node-2] => (item=users)  2025-03-11 01:14:34.937954 | orchestrator | skipping: [testbed-node-0] => (item=rules)  2025-03-11 01:14:34.937964 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.937982 | orchestrator | skipping: [testbed-node-1] => (item=rules)  2025-03-11 
01:14:34.938004 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.938043 | orchestrator | skipping: [testbed-node-2] => (item=rules)  2025-03-11 01:14:34.938057 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.938068 | orchestrator | 2025-03-11 01:14:34.938078 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-03-11 01:14:34.938088 | orchestrator | Tuesday 11 March 2025 01:06:39 +0000 (0:00:03.052) 0:00:36.586 ********* 2025-03-11 01:14:34.938098 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:14:34.938109 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:14:34.938119 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:14:34.938129 | orchestrator | 2025-03-11 01:14:34.938140 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-03-11 01:14:34.938150 | orchestrator | Tuesday 11 March 2025 01:06:41 +0000 (0:00:02.365) 0:00:38.952 ********* 2025-03-11 01:14:34.938160 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.938170 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.938180 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.938191 | orchestrator | 2025-03-11 01:14:34.938201 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-03-11 01:14:34.938211 | orchestrator | Tuesday 11 March 2025 01:06:43 +0000 (0:00:02.305) 0:00:41.257 ********* 2025-03-11 01:14:34.938222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-11 01:14:34.938234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-11 01:14:34.938245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-11 01:14:34.938256 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-11 01:14:34.938328 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-11 01:14:34.938352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.938363 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-11 01:14:34.938374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.942156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.942242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.942265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.942345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.942364 | orchestrator | 2025-03-11 01:14:34.942381 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-03-11 01:14:34.942397 | orchestrator | Tuesday 11 March 2025 01:06:48 +0000 (0:00:04.564) 0:00:45.822 ********* 2025-03-11 01:14:34.942412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.942427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.942442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.942457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.942472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.942495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.942531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.942548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.942563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.942591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.942607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 
'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.942655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.942680 | orchestrator | 2025-03-11 01:14:34.942695 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-03-11 01:14:34.942709 | orchestrator | Tuesday 11 March 2025 01:06:55 +0000 (0:00:07.009) 0:00:52.831 ********* 2025-03-11 01:14:34.942742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.942758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.942776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.942791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.942807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.942830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.942845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.942879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.942895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.942911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.942926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.942942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.942964 | orchestrator | 2025-03-11 01:14:34.942979 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-03-11 01:14:34.942994 | orchestrator | Tuesday 11 March 2025 01:06:59 +0000 (0:00:03.666) 0:00:56.497 ********* 2025-03-11 01:14:34.943009 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-11 01:14:34.943024 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-11 01:14:34.943039 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-11 01:14:34.943053 | orchestrator | 2025-03-11 01:14:34.943068 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-03-11 01:14:34.943082 | orchestrator | Tuesday 11 March 2025 01:07:06 +0000 (0:00:07.377) 0:01:03.875 ********* 2025-03-11 01:14:34.943096 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-03-11 01:14:34.943111 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.943126 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-03-11 01:14:34.943141 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.943155 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-03-11 01:14:34.943169 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.943183 | orchestrator | 2025-03-11 01:14:34.943198 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-03-11 01:14:34.943212 | orchestrator | Tuesday 11 March 2025 01:07:09 +0000 (0:00:03.256) 0:01:07.131 ********* 2025-03-11 01:14:34.943226 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.943240 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.943271 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.943286 | orchestrator | 2025-03-11 01:14:34.943302 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-03-11 01:14:34.943316 | orchestrator | Tuesday 11 March 2025 01:07:13 +0000 (0:00:03.335) 0:01:10.466 ********* 2025-03-11 01:14:34.943331 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-11 01:14:34.943346 | 
orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-11 01:14:34.943360 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-11 01:14:34.943374 | orchestrator | 2025-03-11 01:14:34.943389 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-03-11 01:14:34.943403 | orchestrator | Tuesday 11 March 2025 01:07:21 +0000 (0:00:08.468) 0:01:18.935 ********* 2025-03-11 01:14:34.943417 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-11 01:14:34.943432 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-11 01:14:34.943446 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-11 01:14:34.943461 | orchestrator | 2025-03-11 01:14:34.943475 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-03-11 01:14:34.943489 | orchestrator | Tuesday 11 March 2025 01:07:28 +0000 (0:00:07.002) 0:01:25.937 ********* 2025-03-11 01:14:34.943504 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-03-11 01:14:34.943518 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-03-11 01:14:34.943533 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-03-11 01:14:34.943555 | orchestrator | 2025-03-11 01:14:34.943569 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-03-11 01:14:34.943584 | orchestrator | Tuesday 11 March 2025 01:07:31 +0000 (0:00:02.751) 0:01:28.689 ********* 2025-03-11 01:14:34.943598 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-03-11 
01:14:34.943630 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-03-11 01:14:34.943645 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-03-11 01:14:34.943659 | orchestrator | 2025-03-11 01:14:34.943673 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-03-11 01:14:34.943687 | orchestrator | Tuesday 11 March 2025 01:07:35 +0000 (0:00:04.314) 0:01:33.004 ********* 2025-03-11 01:14:34.943702 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.943716 | orchestrator | 2025-03-11 01:14:34.943730 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-03-11 01:14:34.943745 | orchestrator | Tuesday 11 March 2025 01:07:37 +0000 (0:00:02.283) 0:01:35.287 ********* 2025-03-11 01:14:34.943759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.943775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.943790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.943822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.943838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.943860 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.943876 | orchestrator | 2025-03-11 01:14:34.943890 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-03-11 01:14:34.943905 | orchestrator | Tuesday 11 March 2025 01:07:43 +0000 (0:00:05.660) 0:01:40.948 ********* 2025-03-11 01:14:34.943919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-11 01:14:34.943940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.943955 | orchestrator | skipping: [testbed-node-0] 2025-03-11 
01:14:34.943970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-11 01:14:34.943985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.943999 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.944033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-11 01:14:34.944049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.944070 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.944085 | orchestrator | 2025-03-11 01:14:34.944100 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-03-11 01:14:34.944114 | orchestrator | Tuesday 11 March 2025 01:07:44 +0000 (0:00:01.284) 0:01:42.232 ********* 2025-03-11 01:14:34.944129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-11 01:14:34.944144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.944159 | orchestrator | skipping: [testbed-node-0] 
2025-03-11 01:14:34.944173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-11 01:14:34.944188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.944203 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.944217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-11 01:14:34.944248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.944270 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.944285 | orchestrator | 2025-03-11 01:14:34.944300 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-03-11 01:14:34.944315 | orchestrator | Tuesday 11 March 2025 01:07:46 +0000 (0:00:01.733) 0:01:43.966 ********* 2025-03-11 01:14:34.944330 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-11 01:14:34.944344 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-11 01:14:34.944358 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-11 01:14:34.944372 | orchestrator | 2025-03-11 01:14:34.944387 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-03-11 01:14:34.944401 | orchestrator | Tuesday 11 March 2025 01:07:48 +0000 (0:00:02.375) 0:01:46.341 ********* 2025-03-11 01:14:34.944416 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-03-11 01:14:34.944430 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.944444 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-03-11 01:14:34.944458 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.944473 | orchestrator | skipping: [testbed-node-1] 
=> (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-03-11 01:14:34.944487 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.944501 | orchestrator | 2025-03-11 01:14:34.944516 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-03-11 01:14:34.944530 | orchestrator | Tuesday 11 March 2025 01:07:50 +0000 (0:00:01.556) 0:01:47.897 ********* 2025-03-11 01:14:34.944544 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-11 01:14:34.944559 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-11 01:14:34.944573 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-11 01:14:34.944588 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-11 01:14:34.944602 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.944644 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-11 01:14:34.944659 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.944673 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-11 01:14:34.944688 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.944702 | orchestrator | 2025-03-11 01:14:34.944721 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-03-11 01:14:34.944736 | orchestrator | Tuesday 11 March 2025 01:07:55 +0000 (0:00:04.584) 0:01:52.482 ********* 2025-03-11 01:14:34.944752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.944767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.944806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.944823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.944839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-11 01:14:34.944854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-11 01:14:34.944869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.944884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.944905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.944947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-03-11 01:14:34.944964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.944980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91', '__omit_place_holder__ac6e32e28345abd9ff43679223f9bee6e7a5ec91'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-11 01:14:34.944994 | orchestrator | 2025-03-11 01:14:34.945009 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-03-11 01:14:34.945024 | orchestrator | Tuesday 11 March 2025 01:07:58 +0000 (0:00:03.798) 0:01:56.281 ********* 2025-03-11 01:14:34.945038 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.945052 | orchestrator | 2025-03-11 01:14:34.945067 | orchestrator 
| TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-03-11 01:14:34.945081 | orchestrator | Tuesday 11 March 2025 01:07:59 +0000 (0:00:00.641) 0:01:56.922 ********* 2025-03-11 01:14:34.945096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-11 01:14:34.945118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.945151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-11 01:14:34.945200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-11 01:14:34.945215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.945237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.945252 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34 | INFO  | Task 530f75f6-cd53-4040-b6dd-d2f3a6fae929 is in state SUCCESS 2025-03-11 01:14:34.945316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 
'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945345 | orchestrator | 2025-03-11 01:14:34.945360 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-03-11 01:14:34.945375 | orchestrator | Tuesday 11 March 2025 01:08:06 +0000 (0:00:07.195) 0:02:04.118 ********* 2025-03-11 01:14:34.945389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-11 01:14:34.945417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.945433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945480 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.945496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-11 01:14:34.945511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.945526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945567 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.945582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-11 01:14:34.945661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.945680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 
'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.945708 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.945721 | orchestrator | 2025-03-11 01:14:34.945734 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-03-11 01:14:34.945747 | orchestrator | Tuesday 11 March 2025 01:08:08 +0000 (0:00:01.806) 0:02:05.924 ********* 2025-03-11 01:14:34.945760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:14:34.945779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:14:34.945792 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.945806 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:14:34.945819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:14:34.945831 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.945844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:14:34.945857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-11 01:14:34.945869 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.945882 | orchestrator | 2025-03-11 01:14:34.945894 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-03-11 01:14:34.945907 | orchestrator | Tuesday 11 March 2025 01:08:10 +0000 (0:00:01.777) 0:02:07.701 ********* 2025-03-11 01:14:34.945920 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.945932 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.945945 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.945958 | orchestrator | 2025-03-11 01:14:34.945970 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-03-11 01:14:34.945983 | orchestrator | Tuesday 11 March 2025 01:08:11 +0000 (0:00:01.034) 0:02:08.736 ********* 2025-03-11 01:14:34.945995 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.946008 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.946047 | orchestrator | skipping: 
[testbed-node-2]
2025-03-11 01:14:34.946062 | orchestrator |
2025-03-11 01:14:34.946075 | orchestrator | TASK [include_role : barbican] *************************************************
2025-03-11 01:14:34.946087 | orchestrator | Tuesday 11 March 2025 01:08:13 +0000 (0:00:02.477) 0:02:11.213 *********
2025-03-11 01:14:34.946100 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:14:34.946112 | orchestrator |
2025-03-11 01:14:34.946125 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-03-11 01:14:34.946137 | orchestrator | Tuesday 11 March 2025 01:08:14 +0000 (0:00:00.856) 0:02:12.070 *********
2025-03-11 01:14:34.946177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-11 01:14:34.946193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-11 01:14:34.946245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-11 01:14:34.946308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946340 | orchestrator |
2025-03-11 01:14:34.946353 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external
frontend] ***
2025-03-11 01:14:34.946365 | orchestrator | Tuesday 11 March 2025 01:08:22 +0000 (0:00:07.310) 0:02:19.380 *********
2025-03-11 01:14:34.946378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-11 01:14:34.946406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946440 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.946453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-11 01:14:34.946474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946501 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.946528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-03-11 01:14:34.946542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.946575 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.946587 | orchestrator |
2025-03-11 01:14:34.946600 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-03-11 01:14:34.946626 | orchestrator | Tuesday 11 March 2025 01:08:22 +0000 (0:00:00.875) 0:02:20.256 *********
2025-03-11 01:14:34.946640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:14:34.946653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:14:34.946667 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.946680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:14:34.946698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:14:34.946712 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.946725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:14:34.946737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-03-11 01:14:34.946750 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.946763 | orchestrator |
2025-03-11 01:14:34.946775 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-03-11 01:14:34.946788 | orchestrator | Tuesday 11 March 2025 01:08:24 +0000 (0:00:01.812) 0:02:22.068 *********
2025-03-11 01:14:34.946801 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.946813 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.946826 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.946838 | orchestrator |
2025-03-11 01:14:34.946851 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-03-11 01:14:34.946863 | orchestrator | Tuesday 11 March 2025 01:08:25 +0000 (0:00:00.513) 0:02:22.582 *********
2025-03-11 01:14:34.946876 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.946888 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.946901 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.946913 | orchestrator |
2025-03-11 01:14:34.946926 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-03-11 01:14:34.946944 | orchestrator | Tuesday 11 March 2025 01:08:26 +0000 (0:00:01.757) 0:02:24.340 *********
2025-03-11 01:14:34.946957 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.946975 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.946987 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.947000 | orchestrator |
2025-03-11 01:14:34.947013 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-03-11 01:14:34.947025 | orchestrator | Tuesday 11 March 2025 01:08:27 +0000 (0:00:00.513) 0:02:24.853 *********
2025-03-11 01:14:34.947052 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:14:34.947066 | orchestrator |
2025-03-11 01:14:34.947083 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-03-11 01:14:34.947097 | orchestrator | Tuesday 11 March 2025 01:08:28 +0000 (0:00:00.965) 0:02:25.819 *********
2025-03-11 01:14:34.947114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2
fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-03-11 01:14:34.947128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-03-11 01:14:34.947141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-03-11 01:14:34.947154 | orchestrator |
2025-03-11 01:14:34.947167 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-03-11 01:14:34.947179 | orchestrator | Tuesday 11 March 2025 01:08:32 +0000 (0:00:04.386) 0:02:30.205 *********
2025-03-11 01:14:34.947192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-03-11 01:14:34.947211 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.947244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-03-11 01:14:34.947259 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.947272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-03-11 01:14:34.947286 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.947298 | orchestrator |
2025-03-11 01:14:34.947311 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-03-11 01:14:34.947324 | orchestrator | Tuesday 11 March 2025 01:08:36 +0000 (0:00:03.418) 0:02:33.624 *********
2025-03-11 01:14:34.947337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-03-11 01:14:34.947350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-03-11 01:14:34.947363 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.947376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-03-11 01:14:34.947395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-03-11 01:14:34.947408 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.947421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-03-11 01:14:34.947447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-03-11 01:14:34.947461 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.947474 | orchestrator |
2025-03-11 01:14:34.947487 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-03-11 01:14:34.947500 | orchestrator | Tuesday 11 March 2025 01:08:38 +0000 (0:00:02.633) 0:02:36.257 *********
2025-03-11 01:14:34.947512 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.947525 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.947537 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.947550 | orchestrator |
2025-03-11 01:14:34.947562 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-03-11 01:14:34.947575 | orchestrator | Tuesday 11 March 2025 01:08:39 +0000 (0:00:00.675) 0:02:36.933 *********
2025-03-11 01:14:34.947588 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.947600 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.947651 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.947666 | orchestrator |
2025-03-11 01:14:34.947679 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-03-11 01:14:34.947691 | orchestrator | Tuesday 11 March 2025 01:08:41 +0000 (0:00:02.208) 0:02:39.141 *********
2025-03-11 01:14:34.947703 | orchestrator | included:
cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:14:34.947716 | orchestrator |
2025-03-11 01:14:34.947729 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-03-11 01:14:34.947741 | orchestrator | Tuesday 11 March 2025 01:08:45 +0000 (0:00:03.403) 0:02:42.545 *********
2025-03-11 01:14:34.947754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-03-11 01:14:34.947767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.947788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.947806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.947836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-03-11 01:14:34.947851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.947864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.947888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.947901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-03-11 01:14:34.947930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.947944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.947957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.947973 | orchestrator |
2025-03-11 01:14:34.947983 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-03-11 01:14:34.947994 | orchestrator | Tuesday 11 March 2025 01:08:57 +0000 (0:00:12.647) 0:02:55.192 *********
2025-03-11 01:14:34.948005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-03-11 01:14:34.948015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.948046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host',
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948070 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.948081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.948097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948145 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.948156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.948167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948205 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.948216 | orchestrator | 2025-03-11 01:14:34.948226 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] 
************************ 2025-03-11 01:14:34.948237 | orchestrator | Tuesday 11 March 2025 01:09:00 +0000 (0:00:02.476) 0:02:57.668 ********* 2025-03-11 01:14:34.948247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-11 01:14:34.948258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-11 01:14:34.948269 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.948279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-11 01:14:34.948302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-11 01:14:34.948313 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.948324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-11 01:14:34.948335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-11 01:14:34.948346 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.948356 | orchestrator | 2025-03-11 01:14:34.948367 | orchestrator | TASK [proxysql-config : Copying over 
cinder ProxySQL users config] ************* 2025-03-11 01:14:34.948377 | orchestrator | Tuesday 11 March 2025 01:09:02 +0000 (0:00:02.289) 0:02:59.958 ********* 2025-03-11 01:14:34.948392 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.948403 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.948413 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.948423 | orchestrator | 2025-03-11 01:14:34.948433 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-03-11 01:14:34.948443 | orchestrator | Tuesday 11 March 2025 01:09:03 +0000 (0:00:00.937) 0:03:00.896 ********* 2025-03-11 01:14:34.948454 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.948464 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.948474 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.948484 | orchestrator | 2025-03-11 01:14:34.948495 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-03-11 01:14:34.948505 | orchestrator | Tuesday 11 March 2025 01:09:06 +0000 (0:00:02.593) 0:03:03.489 ********* 2025-03-11 01:14:34.948515 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.948525 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.948536 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.948546 | orchestrator | 2025-03-11 01:14:34.948556 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-03-11 01:14:34.948566 | orchestrator | Tuesday 11 March 2025 01:09:06 +0000 (0:00:00.466) 0:03:03.955 ********* 2025-03-11 01:14:34.948577 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.948587 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.948597 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.948607 | orchestrator | 2025-03-11 01:14:34.948631 | orchestrator | TASK [include_role : designate] 
************************************************ 2025-03-11 01:14:34.948642 | orchestrator | Tuesday 11 March 2025 01:09:07 +0000 (0:00:00.850) 0:03:04.806 ********* 2025-03-11 01:14:34.948652 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.948662 | orchestrator | 2025-03-11 01:14:34.948673 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-03-11 01:14:34.948683 | orchestrator | Tuesday 11 March 2025 01:09:09 +0000 (0:00:01.689) 0:03:06.495 ********* 2025-03-11 01:14:34.948694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-11 01:14:34.948705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-11 01:14:34.948729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})  2025-03-11 01:14:34.948768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-11 01:14:34.948823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-11 01:14:34.948855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948921 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-11 01:14:34.948950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-11 01:14:34.948962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.948992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2025-03-11 01:14:34.949013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949024 | orchestrator | 2025-03-11 01:14:34.949035 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-03-11 01:14:34.949045 | orchestrator | Tuesday 11 March 2025 01:09:18 +0000 (0:00:09.536) 0:03:16.031 ********* 2025-03-11 01:14:34.949068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-11 01:14:34.949094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-11 01:14:34.949106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949165 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.949199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-11 01:14:34.949212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-11 01:14:34.949223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-11 01:14:34.949234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-11 01:14:34.949245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2025-03-11 01:14:34.949324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949381 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.949411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.949422 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.949433 | orchestrator | 2025-03-11 01:14:34.949444 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-03-11 01:14:34.949455 | orchestrator | Tuesday 11 March 2025 01:09:20 +0000 (0:00:01.401) 0:03:17.432 ********* 2025-03-11 01:14:34.949465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:14:34.949476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}) 
 2025-03-11 01:14:34.949486 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.949497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:14:34.949508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:14:34.949518 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.949529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:14:34.949539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-11 01:14:34.949549 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.949560 | orchestrator | 2025-03-11 01:14:34.949570 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-03-11 01:14:34.949581 | orchestrator | Tuesday 11 March 2025 01:09:22 +0000 (0:00:01.960) 0:03:19.393 ********* 2025-03-11 01:14:34.949591 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.949601 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.949625 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.949642 | orchestrator | 2025-03-11 01:14:34.949653 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-03-11 01:14:34.949663 | orchestrator | Tuesday 11 March 2025 01:09:22 +0000 (0:00:00.360) 0:03:19.754 ********* 2025-03-11 01:14:34.949673 | orchestrator | skipping: 
[testbed-node-0] 2025-03-11 01:14:34.949684 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.949694 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.949704 | orchestrator | 2025-03-11 01:14:34.949714 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-03-11 01:14:34.949725 | orchestrator | Tuesday 11 March 2025 01:09:24 +0000 (0:00:01.924) 0:03:21.678 ********* 2025-03-11 01:14:34.949735 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.949745 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.949756 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.949766 | orchestrator | 2025-03-11 01:14:34.949776 | orchestrator | TASK [include_role : glance] *************************************************** 2025-03-11 01:14:34.949786 | orchestrator | Tuesday 11 March 2025 01:09:24 +0000 (0:00:00.655) 0:03:22.334 ********* 2025-03-11 01:14:34.949797 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.949807 | orchestrator | 2025-03-11 01:14:34.949818 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-03-11 01:14:34.949828 | orchestrator | Tuesday 11 March 2025 01:09:26 +0000 (0:00:01.414) 0:03:23.749 ********* 2025-03-11 01:14:34.949860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-11 01:14:34.949875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.949912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-11 01:14:34.949924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.949948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-11 01:14:34.949979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.949997 | orchestrator | 2025-03-11 01:14:34.950008 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-03-11 01:14:34.950037 | orchestrator | Tuesday 11 March 2025 01:09:36 +0000 (0:00:10.273) 0:03:34.022 ********* 2025-03-11 01:14:34.950050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-11 01:14:34.950084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.950097 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.950114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-11 01:14:34.950138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
''], 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.950159 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.950171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-11 01:14:34.950207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.950220 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.950231 | orchestrator | 2025-03-11 01:14:34.950242 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] 
************************ 2025-03-11 01:14:34.950252 | orchestrator | Tuesday 11 March 2025 01:09:42 +0000 (0:00:05.440) 0:03:39.462 ********* 2025-03-11 01:14:34.950262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:14:34.950279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:14:34.950290 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.950301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:14:34.950312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:14:34.950323 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.950340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:14:34.950352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-11 01:14:34.950362 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.950374 | orchestrator | 2025-03-11 01:14:34.950385 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-03-11 01:14:34.950395 | orchestrator | Tuesday 11 March 2025 01:09:48 +0000 (0:00:06.836) 0:03:46.299 ********* 2025-03-11 01:14:34.950406 | orchestrator | 
skipping: [testbed-node-0] 2025-03-11 01:14:34.950417 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.950427 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.950437 | orchestrator | 2025-03-11 01:14:34.950448 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-03-11 01:14:34.950470 | orchestrator | Tuesday 11 March 2025 01:09:49 +0000 (0:00:00.588) 0:03:46.888 ********* 2025-03-11 01:14:34.950482 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.950492 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.950503 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.950513 | orchestrator | 2025-03-11 01:14:34.950524 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-03-11 01:14:34.950534 | orchestrator | Tuesday 11 March 2025 01:09:51 +0000 (0:00:01.524) 0:03:48.412 ********* 2025-03-11 01:14:34.950551 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.950561 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.950572 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.950582 | orchestrator | 2025-03-11 01:14:34.950593 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-03-11 01:14:34.950603 | orchestrator | Tuesday 11 March 2025 01:09:51 +0000 (0:00:00.496) 0:03:48.909 ********* 2025-03-11 01:14:34.950649 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.950661 | orchestrator | 2025-03-11 01:14:34.950671 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-03-11 01:14:34.950681 | orchestrator | Tuesday 11 March 2025 01:09:52 +0000 (0:00:01.049) 0:03:49.959 ********* 2025-03-11 01:14:34.950692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-11 01:14:34.950703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-11 01:14:34.950715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-11 01:14:34.950725 | orchestrator 
| 2025-03-11 01:14:34.950736 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-03-11 01:14:34.950746 | orchestrator | Tuesday 11 March 2025 01:09:56 +0000 (0:00:04.365) 0:03:54.325 ********* 2025-03-11 01:14:34.950757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-11 01:14:34.950767 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.950800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-11 01:14:34.950819 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.950830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-11 01:14:34.950841 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.950852 | orchestrator | 2025-03-11 01:14:34.950862 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-03-11 01:14:34.950873 | orchestrator | Tuesday 11 March 2025 01:09:57 +0000 (0:00:00.614) 0:03:54.940 ********* 2025-03-11 01:14:34.950883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:14:34.950897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:14:34.950908 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.950918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:14:34.950929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:14:34.950939 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.950950 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:14:34.950960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-11 01:14:34.950970 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.950981 | orchestrator | 2025-03-11 01:14:34.950995 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-03-11 01:14:34.951005 | orchestrator | Tuesday 11 March 2025 01:09:58 +0000 (0:00:00.863) 0:03:55.803 ********* 2025-03-11 01:14:34.951016 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.951026 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.951036 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.951047 | orchestrator | 2025-03-11 01:14:34.951057 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-03-11 01:14:34.951067 | orchestrator | Tuesday 11 March 2025 01:09:58 +0000 (0:00:00.515) 0:03:56.318 ********* 2025-03-11 01:14:34.951077 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.951087 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.951102 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.951113 | orchestrator | 2025-03-11 01:14:34.951123 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-03-11 01:14:34.951133 | orchestrator | Tuesday 11 March 2025 01:10:00 +0000 (0:00:01.568) 0:03:57.887 ********* 2025-03-11 01:14:34.951143 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.951153 | orchestrator | 2025-03-11 01:14:34.951164 | orchestrator | TASK 
[haproxy-config : Copying over heat haproxy config] *********************** 2025-03-11 01:14:34.951173 | orchestrator | Tuesday 11 March 2025 01:10:01 +0000 (0:00:01.314) 0:03:59.201 ********* 2025-03-11 01:14:34.951194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.951204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-11 
01:14:34.951214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.951223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.951244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 
'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.951265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.951275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.951284 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.951300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.951310 | orchestrator | 2025-03-11 01:14:34.951319 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-03-11 01:14:34.951328 | orchestrator | Tuesday 11 March 2025 01:10:11 +0000 (0:00:10.003) 0:04:09.205 ********* 2025-03-11 01:14:34.951337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': 
['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.951362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.951372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.951381 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.951390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.951409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.951423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.951432 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.951441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.951462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.951472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.951481 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.951490 | orchestrator | 2025-03-11 01:14:34.951499 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-03-11 01:14:34.951508 | orchestrator | Tuesday 11 March 2025 01:10:13 +0000 (0:00:01.605) 0:04:10.811 ********* 2025-03-11 01:14:34.951517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951558 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.951567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951605 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.951625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951635 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-11 01:14:34.951673 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.951683 | orchestrator | 2025-03-11 01:14:34.951761 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-03-11 01:14:34.951771 | orchestrator | Tuesday 11 March 2025 01:10:14 +0000 (0:00:01.523) 0:04:12.334 ********* 2025-03-11 01:14:34.951780 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.951789 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.951797 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.951806 | orchestrator | 2025-03-11 01:14:34.951815 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-03-11 01:14:34.951823 | orchestrator | Tuesday 11 March 2025 01:10:15 +0000 (0:00:00.567) 0:04:12.902 ********* 2025-03-11 01:14:34.951832 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.951841 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.951850 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.951858 | orchestrator | 2025-03-11 01:14:34.951867 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-03-11 01:14:34.951876 | 
orchestrator | Tuesday 11 March 2025 01:10:17 +0000 (0:00:01.526) 0:04:14.428 ********* 2025-03-11 01:14:34.951884 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.951893 | orchestrator | 2025-03-11 01:14:34.951902 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-03-11 01:14:34.951910 | orchestrator | Tuesday 11 March 2025 01:10:18 +0000 (0:00:01.186) 0:04:15.615 ********* 2025-03-11 01:14:34.951924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-11 01:14:34.951949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-11 01:14:34.951964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-11 01:14:34.951973 | orchestrator | 2025-03-11 01:14:34.951982 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-03-11 01:14:34.951991 | orchestrator | Tuesday 11 March 2025 01:10:23 +0000 (0:00:04.990) 0:04:20.605 ********* 2025-03-11 01:14:34.952017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 
'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-11 01:14:34.952032 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.952042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 
'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-11 01:14:34.952051 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.952073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-11 01:14:34.952088 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.952097 | orchestrator | 2025-03-11 01:14:34.952106 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-03-11 01:14:34.952114 | orchestrator | Tuesday 11 March 2025 01:10:24 +0000 (0:00:01.129) 0:04:21.735 ********* 2025-03-11 01:14:34.952123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-11 01:14:34.952132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-11 01:14:34.952142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-11 01:14:34.952152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-11 01:14:34.952161 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-11 01:14:34.952170 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.952182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-11 01:14:34.952203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-11 01:14:34.952212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-11 01:14:34.952226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-11 01:14:34.952235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-11 01:14:34.952245 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.952254 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-11 01:14:34.952263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-11 01:14:34.952271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-11 01:14:34.952280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-11 01:14:34.952289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-11 01:14:34.952298 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.952307 | orchestrator | 2025-03-11 01:14:34.952316 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-03-11 01:14:34.952325 | orchestrator | Tuesday 11 March 2025 01:10:25 +0000 (0:00:01.520) 0:04:23.255 ********* 
2025-03-11 01:14:34.952333 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.952342 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.952351 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.952359 | orchestrator | 2025-03-11 01:14:34.952368 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-03-11 01:14:34.952377 | orchestrator | Tuesday 11 March 2025 01:10:26 +0000 (0:00:00.544) 0:04:23.800 ********* 2025-03-11 01:14:34.952385 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.952394 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.952402 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.952411 | orchestrator | 2025-03-11 01:14:34.952420 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-03-11 01:14:34.952429 | orchestrator | Tuesday 11 March 2025 01:10:28 +0000 (0:00:01.952) 0:04:25.753 ********* 2025-03-11 01:14:34.952437 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.952446 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.952455 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.952463 | orchestrator | 2025-03-11 01:14:34.952472 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-03-11 01:14:34.952488 | orchestrator | Tuesday 11 March 2025 01:10:28 +0000 (0:00:00.415) 0:04:26.168 ********* 2025-03-11 01:14:34.952497 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.952509 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.952518 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.952527 | orchestrator | 2025-03-11 01:14:34.952535 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-03-11 01:14:34.952544 | orchestrator | Tuesday 11 March 2025 01:10:29 +0000 (0:00:00.604) 0:04:26.773 ********* 
2025-03-11 01:14:34.952553 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.952572 | orchestrator | 2025-03-11 01:14:34.952582 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-03-11 01:14:34.952591 | orchestrator | Tuesday 11 March 2025 01:10:30 +0000 (0:00:01.475) 0:04:28.248 ********* 2025-03-11 01:14:34.952600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-03-11 01:14:34.952621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:14:34.952631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:14:34.952641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-03-11 01:14:34.952657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:14:34.952678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:14:34.952688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-03-11 01:14:34.952698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:14:34.952707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:14:34.952717 | orchestrator | 2025-03-11 01:14:34.952726 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-03-11 01:14:34.952735 | orchestrator | Tuesday 11 March 2025 01:10:35 +0000 (0:00:05.003) 0:04:33.251 ********* 2025-03-11 01:14:34.952755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-03-11 01:14:34.952776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:14:34.952786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:14:34.952795 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.952805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-03-11 01:14:34.952814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:14:34.952824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:14:34.952837 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.952857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-03-11 01:14:34.952867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-11 01:14:34.952876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-11 01:14:34.952885 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.952895 | orchestrator | 2025-03-11 01:14:34.952904 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-03-11 01:14:34.952913 | orchestrator | Tuesday 11 March 2025 01:10:36 +0000 (0:00:00.691) 0:04:33.943 ********* 2025-03-11 01:14:34.952922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-11 01:14:34.952933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-11 01:14:34.952942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-11 01:14:34.952956 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.952965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-11 01:14:34.952974 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.952982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-11 01:14:34.952991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-03-11 01:14:34.953000 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.953009 | orchestrator | 2025-03-11 01:14:34.953018 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-03-11 01:14:34.953026 | orchestrator | Tuesday 11 March 2025 01:10:37 +0000 (0:00:01.384) 0:04:35.327 ********* 2025-03-11 01:14:34.953035 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.953044 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.953053 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.953062 | orchestrator | 2025-03-11 01:14:34.953070 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-03-11 
01:14:34.953079 | orchestrator | Tuesday 11 March 2025 01:10:38 +0000 (0:00:00.376) 0:04:35.704 ********* 2025-03-11 01:14:34.953088 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.953097 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.953105 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.953114 | orchestrator | 2025-03-11 01:14:34.953123 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-03-11 01:14:34.953142 | orchestrator | Tuesday 11 March 2025 01:10:39 +0000 (0:00:01.578) 0:04:37.282 ********* 2025-03-11 01:14:34.953152 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.953161 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.953170 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.953179 | orchestrator | 2025-03-11 01:14:34.953188 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-03-11 01:14:34.953197 | orchestrator | Tuesday 11 March 2025 01:10:40 +0000 (0:00:00.562) 0:04:37.844 ********* 2025-03-11 01:14:34.953206 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.953214 | orchestrator | 2025-03-11 01:14:34.953223 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-03-11 01:14:34.953232 | orchestrator | Tuesday 11 March 2025 01:10:42 +0000 (0:00:01.610) 0:04:39.455 ********* 2025-03-11 01:14:34.953241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-11 01:14:34.953255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.953264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-11 01:14:34.953274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.953297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-11 01:14:34.953308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.953321 | orchestrator | 2025-03-11 01:14:34.953331 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-03-11 01:14:34.953340 | orchestrator | Tuesday 11 March 2025 01:10:48 +0000 (0:00:06.810) 0:04:46.265 ********* 2025-03-11 01:14:34.953349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-11 01:14:34.953358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.953367 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.953387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-11 01:14:34.953397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.953406 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.953415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-11 01:14:34.953428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.953438 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.953446 | orchestrator | 2025-03-11 01:14:34.953455 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-03-11 01:14:34.953464 | orchestrator | Tuesday 11 March 2025 01:10:50 +0000 (0:00:01.285) 0:04:47.550 ********* 2025-03-11 01:14:34.953473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:14:34.953482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:14:34.953491 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.953500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:14:34.953508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:14:34.953517 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.953526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-11 01:14:34.953535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}})  2025-03-11 01:14:34.953544 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.953552 | orchestrator | 2025-03-11 01:14:34.953561 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-03-11 01:14:34.953570 | orchestrator | Tuesday 11 March 2025 01:10:51 +0000 (0:00:01.444) 0:04:48.995 ********* 2025-03-11 01:14:34.953590 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.953600 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.953609 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.953650 | orchestrator | 2025-03-11 01:14:34.953659 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-03-11 01:14:34.953668 | orchestrator | Tuesday 11 March 2025 01:10:51 +0000 (0:00:00.342) 0:04:49.338 ********* 2025-03-11 01:14:34.953677 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.953685 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.953699 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.953708 | orchestrator | 2025-03-11 01:14:34.953717 | orchestrator | TASK [include_role : manila] *************************************************** 2025-03-11 01:14:34.953726 | orchestrator | Tuesday 11 March 2025 01:10:53 +0000 (0:00:01.636) 0:04:50.974 ********* 2025-03-11 01:14:34.953735 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.953743 | orchestrator | 2025-03-11 01:14:34.953752 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-03-11 01:14:34.953761 | orchestrator | Tuesday 11 March 2025 01:10:55 +0000 (0:00:01.552) 0:04:52.526 ********* 2025-03-11 01:14:34.953770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-11 01:14:34.953779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-11 01:14:34.953788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.953798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.953825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.953840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-03-11 01:14:34.953849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.953858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.953867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.953876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.953895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.953910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.953919 | orchestrator |
2025-03-11 01:14:34.953928 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-03-11 01:14:34.953938 | orchestrator | Tuesday 11 March 2025 01:11:00 +0000 (0:00:05.190) 0:04:57.717 *********
2025-03-11 01:14:34.953947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-03-11 01:14:34.953956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.953965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.953974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.953983 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.954002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-03-11 01:14:34.954056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.954067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.954076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.954084 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.954092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-03-11 01:14:34.954101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.954125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.954135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-03-11 01:14:34.954144 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.954153 | orchestrator |
2025-03-11 01:14:34.954161 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-03-11 01:14:34.954169 | orchestrator | Tuesday 11 March 2025 01:11:01 +0000 (0:00:01.114) 0:04:58.831 *********
2025-03-11 01:14:34.954178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:14:34.954186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:14:34.954194 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.954202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:14:34.954211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:14:34.954219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:14:34.954227 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.954235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-03-11 01:14:34.954243 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.954251 | orchestrator |
2025-03-11 01:14:34.954259 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-03-11 01:14:34.954267 | orchestrator | Tuesday 11 March 2025 01:11:02 +0000 (0:00:01.459) 0:05:00.291 *********
2025-03-11 01:14:34.954275 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.954283 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.954291 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.954299 | orchestrator |
2025-03-11 01:14:34.954307 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-03-11 01:14:34.954315 | orchestrator | Tuesday 11 March 2025 01:11:03 +0000 (0:00:00.412) 0:05:00.703 *********
2025-03-11 01:14:34.954323 | orchestrator |
skipping: [testbed-node-0]
2025-03-11 01:14:34.954331 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.954342 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.954350 | orchestrator |
2025-03-11 01:14:34.954359 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-03-11 01:14:34.954367 | orchestrator | Tuesday 11 March 2025 01:11:04 +0000 (0:00:01.591) 0:05:02.295 *********
2025-03-11 01:14:34.954375 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:14:34.954383 | orchestrator |
2025-03-11 01:14:34.954391 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-03-11 01:14:34.954399 | orchestrator | Tuesday 11 March 2025 01:11:06 +0000 (0:00:01.639) 0:05:03.934 *********
2025-03-11 01:14:34.954407 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-03-11 01:14:34.954415 | orchestrator |
2025-03-11 01:14:34.954423 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-03-11 01:14:34.954431 | orchestrator | Tuesday 11 March 2025 01:11:10 +0000 (0:00:03.579) 0:05:07.514 *********
2025-03-11 01:14:34.954451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:14:34.954461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:14:34.954471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:14:34.954495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:14:34.954517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:14:34.954533 |
orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:14:34.954546 | orchestrator |
2025-03-11 01:14:34.954554 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-03-11 01:14:34.954563 | orchestrator | Tuesday 11 March 2025 01:11:14 +0000 (0:00:04.780) 0:05:12.294 *********
2025-03-11 01:14:34.954583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:14:34.954599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:14:34.954608 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.954629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:14:34.954643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:14:34.954652 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.954672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-03-11 01:14:34.954687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image':
'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-03-11 01:14:34.954699 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.954708 | orchestrator |
2025-03-11 01:14:34.954716 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-03-11 01:14:34.954725 | orchestrator | Tuesday 11 March 2025 01:11:18 +0000 (0:00:03.129) 0:05:15.424 *********
2025-03-11 01:14:34.954733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:14:34.954746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:14:34.954754 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.954763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:14:34.954782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:14:34.954791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:14:34.954800 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.954808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-03-11 01:14:34.954817 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.954825 | orchestrator |
2025-03-11 01:14:34.954833 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-03-11 01:14:34.954841 | orchestrator | Tuesday 11 March 2025 01:11:21 +0000 (0:00:03.908) 0:05:19.332 *********
2025-03-11 01:14:34.954853 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.954862 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.954870 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.954878 | orchestrator |
2025-03-11 01:14:34.954886 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-03-11 01:14:34.954894 | orchestrator | Tuesday 11 March 2025 01:11:22 +0000 (0:00:00.571) 0:05:19.904 *********
2025-03-11 01:14:34.954902 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.954914 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.954922 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.954930 | orchestrator |
2025-03-11 01:14:34.954938 | orchestrator | TASK [include_role : masakari] *************************************************
2025-03-11 01:14:34.954946 | orchestrator | Tuesday 11 March 2025 01:11:24 +0000 (0:00:01.708) 0:05:21.612 *********
2025-03-11 01:14:34.954954 | orchestrator | skipping: [testbed-node-0]
2025-03-11 01:14:34.954962 | orchestrator | skipping: [testbed-node-1]
2025-03-11 01:14:34.954970 | orchestrator | skipping: [testbed-node-2]
2025-03-11 01:14:34.954979 | orchestrator |
2025-03-11 01:14:34.954987 | orchestrator | TASK [include_role : memcached] ************************************************
2025-03-11 01:14:34.954995 | orchestrator | Tuesday 11 March 2025 01:11:24 +0000 (0:00:00.354) 0:05:21.967 *********
2025-03-11 01:14:34.955003 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-11 01:14:34.955011 | orchestrator |
2025-03-11 01:14:34.955019 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-03-11 01:14:34.955027 | orchestrator | Tuesday 11 March 2025 01:11:26 +0000 (0:00:01.717) 0:05:23.684 *********
2025-03-11 01:14:34.955035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-11 01:14:34.955049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-11 01:14:34.955068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-03-11 01:14:34.955082 | orchestrator |
2025-03-11 01:14:34.955090 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-03-11 01:14:34.955099 | orchestrator | Tuesday 11 March 2025 01:11:28 +0000 (0:00:01.825) 0:05:25.509 *********
2025-03-11 01:14:34.955107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-11 01:14:34.955115 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.955124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-11 01:14:34.955132 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.955141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-11 01:14:34.955149 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.955157 | orchestrator | 2025-03-11 01:14:34.955165 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-03-11 01:14:34.955173 | orchestrator | Tuesday 11 March 2025 01:11:29 +0000 (0:00:00.864) 0:05:26.374 ********* 2025-03-11 01:14:34.955181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-11 01:14:34.955189 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.955198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-11 01:14:34.955206 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.955223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-11 01:14:34.955238 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.955246 | orchestrator | 2025-03-11 01:14:34.955254 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-03-11 01:14:34.955262 | orchestrator | Tuesday 11 March 2025 01:11:29 +0000 (0:00:00.835) 0:05:27.210 ********* 2025-03-11 01:14:34.955270 | orchestrator | skipping: 
[testbed-node-0] 2025-03-11 01:14:34.955279 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.955287 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.955295 | orchestrator | 2025-03-11 01:14:34.955303 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-03-11 01:14:34.955311 | orchestrator | Tuesday 11 March 2025 01:11:30 +0000 (0:00:00.624) 0:05:27.834 ********* 2025-03-11 01:14:34.955319 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.955327 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.955335 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.955343 | orchestrator | 2025-03-11 01:14:34.955351 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-03-11 01:14:34.955359 | orchestrator | Tuesday 11 March 2025 01:11:32 +0000 (0:00:01.644) 0:05:29.478 ********* 2025-03-11 01:14:34.955367 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.955375 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.955383 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.955391 | orchestrator | 2025-03-11 01:14:34.955399 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-03-11 01:14:34.955407 | orchestrator | Tuesday 11 March 2025 01:11:32 +0000 (0:00:00.370) 0:05:29.849 ********* 2025-03-11 01:14:34.955415 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.955423 | orchestrator | 2025-03-11 01:14:34.955431 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-03-11 01:14:34.955439 | orchestrator | Tuesday 11 March 2025 01:11:34 +0000 (0:00:01.700) 0:05:31.550 ********* 2025-03-11 01:14:34.955447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-11 01:14:34.955456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:14:34.955539 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.955556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.955565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.955605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.955636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.955645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-11 01:14:34.955676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-11 01:14:34.955685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.955694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.955718 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:14:34.955814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:14:34.955823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.955846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955855 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.955864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.955877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.955910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.955919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.955936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.955956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.955975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.955990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.955999 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.956021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.956029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 
'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.956057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.956080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.956095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956104 | orchestrator | 2025-03-11 01:14:34.956112 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-03-11 01:14:34.956120 | orchestrator | Tuesday 11 March 2025 01:11:39 +0000 (0:00:05.651) 0:05:37.202 ********* 2025-03-11 01:14:34.956139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-11 01:14:34.956154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-11 01:14:34.956163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:14:34.956229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.956265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:14:34.956283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.956292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.956331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.956339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.956367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.956376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.956390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 
01:14:34.956403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.956420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.956439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.956454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.956484 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.956492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.956501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 
'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.956534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.956548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-11 01:14:34.956556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956564 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.956573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-11 01:14:34.956710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.956728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.956737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.956776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.956799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-11 01:14:34.956807 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-11 01:14:34.956840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-11 01:14:34.956849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.956862 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.956870 | orchestrator | 2025-03-11 01:14:34.956878 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-03-11 01:14:34.956887 | orchestrator | Tuesday 11 March 2025 01:11:42 +0000 (0:00:02.391) 0:05:39.593 ********* 2025-03-11 01:14:34.956895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:14:34.956903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:14:34.956911 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.956923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:14:34.956931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:14:34.956939 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.956948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:14:34.956956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-11 01:14:34.956964 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.956972 | orchestrator | 2025-03-11 01:14:34.956980 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-03-11 01:14:34.956988 | orchestrator | Tuesday 11 March 2025 01:11:45 +0000 (0:00:02.776) 0:05:42.370 ********* 2025-03-11 01:14:34.956996 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.957005 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.957013 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.957021 | orchestrator | 2025-03-11 01:14:34.957029 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-03-11 01:14:34.957037 | orchestrator | Tuesday 11 March 2025 01:11:45 +0000 (0:00:00.591) 0:05:42.961 ********* 2025-03-11 01:14:34.957045 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.957053 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.957061 | orchestrator | skipping: [testbed-node-2] 
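[Editor's note] The long runs of `skipping:` above come from the haproxy-config role looping over each service's dict and acting only on items that are both enabled and mapped onto the current host. A simplified sketch of that per-item condition (this is an illustrative assumption, not kolla-ansible's actual code; the real check lives in the role's Jinja/YAML conditionals):

```python
# Sketch of the per-item skip condition behind the "skipping:" runs above.
# NOT kolla-ansible's implementation; names mirror the keys visible in the log.

def should_configure(service: dict) -> bool:
    """Return True when config should be written for this service item."""
    # The log shows 'enabled' arriving both as a bool (True/False) and as
    # the string 'no' (e.g. neutron-tls-proxy), so normalize both forms.
    enabled = service.get("enabled", False)
    if isinstance(enabled, str):
        enabled = enabled.strip().lower() in ("yes", "true", "1")
    return bool(enabled) and bool(service.get("host_in_groups", False))

# Values taken from items shown in the log above:
services = {
    "neutron-server": {"enabled": True, "host_in_groups": True},      # changed
    "neutron-tls-proxy": {"enabled": "no", "host_in_groups": True},   # skipping
    "neutron-ovn-agent": {"enabled": False, "host_in_groups": False}, # skipping
}

for name, svc in services.items():
    print(name, "->", "changed" if should_configure(svc) else "skipping")
```

Under this reading, `neutron-server` and `placement-api` report `changed` because both flags hold, while every disabled agent (OVN agent, BGP dragent, TLS proxy, etc.) is skipped per host.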
2025-03-11 01:14:34.957069 | orchestrator | 2025-03-11 01:14:34.957077 | orchestrator | TASK [include_role : placement] ************************************************ 2025-03-11 01:14:34.957085 | orchestrator | Tuesday 11 March 2025 01:11:47 +0000 (0:00:01.705) 0:05:44.667 ********* 2025-03-11 01:14:34.957093 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.957101 | orchestrator | 2025-03-11 01:14:34.957109 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-03-11 01:14:34.957117 | orchestrator | Tuesday 11 March 2025 01:11:48 +0000 (0:00:01.505) 0:05:46.173 ********* 2025-03-11 01:14:34.957137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.957153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.957162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.957170 | orchestrator | 2025-03-11 01:14:34.957179 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-03-11 01:14:34.957187 | orchestrator | Tuesday 11 March 2025 01:11:53 +0000 (0:00:04.458) 0:05:50.632 ********* 2025-03-11 01:14:34.957200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.957209 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.957217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.957232 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.957251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.957260 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.957269 | orchestrator | 2025-03-11 01:14:34.957277 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-03-11 01:14:34.957285 | orchestrator | Tuesday 11 March 2025 01:11:54 +0000 (0:00:00.990) 0:05:51.623 ********* 2025-03-11 01:14:34.957293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957311 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.957319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957327 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957335 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.957343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957360 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.957368 | orchestrator | 2025-03-11 01:14:34.957376 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-03-11 01:14:34.957384 | orchestrator | Tuesday 11 March 2025 01:11:55 +0000 (0:00:01.267) 0:05:52.890 ********* 2025-03-11 01:14:34.957392 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.957404 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.957412 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.957420 | orchestrator | 2025-03-11 01:14:34.957429 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-03-11 01:14:34.957440 | orchestrator | Tuesday 11 March 2025 01:11:56 +0000 (0:00:00.602) 0:05:53.492 ********* 2025-03-11 01:14:34.957448 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.957457 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.957464 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.957472 | orchestrator | 2025-03-11 01:14:34.957481 | orchestrator | TASK [include_role : nova] 
***************************************************** 2025-03-11 01:14:34.957489 | orchestrator | Tuesday 11 March 2025 01:11:57 +0000 (0:00:01.679) 0:05:55.172 ********* 2025-03-11 01:14:34.957497 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.957505 | orchestrator | 2025-03-11 01:14:34.957513 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-03-11 01:14:34.957521 | orchestrator | Tuesday 11 March 2025 01:11:59 +0000 (0:00:01.883) 0:05:57.056 ********* 2025-03-11 01:14:34.957546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.957557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.957566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.957585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957690 | orchestrator | 2025-03-11 01:14:34.957698 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-03-11 01:14:34.957706 | orchestrator | Tuesday 11 March 2025 01:12:06 +0000 (0:00:06.421) 0:06:03.478 ********* 2025-03-11 01:14:34.957714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.957741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957759 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.957768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.957784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957801 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.957826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.957836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.957857 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.957865 | orchestrator | 2025-03-11 01:14:34.957873 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-03-11 01:14:34.957881 | orchestrator | Tuesday 11 March 2025 01:12:07 +0000 (0:00:01.213) 0:06:04.691 ********* 2025-03-11 01:14:34.957889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957930 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.957938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957962 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.957971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-11 01:14:34.957998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:14:34.958007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-11 01:14:34.958037 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958046 | orchestrator | 2025-03-11 01:14:34.958059 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-03-11 01:14:34.958067 | orchestrator | Tuesday 11 March 
2025 01:12:08 +0000 (0:00:01.597) 0:06:06.288 ********* 2025-03-11 01:14:34.958075 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958087 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958095 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958109 | orchestrator | 2025-03-11 01:14:34.958117 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-03-11 01:14:34.958126 | orchestrator | Tuesday 11 March 2025 01:12:09 +0000 (0:00:00.350) 0:06:06.639 ********* 2025-03-11 01:14:34.958134 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958142 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958150 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958158 | orchestrator | 2025-03-11 01:14:34.958166 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-03-11 01:14:34.958173 | orchestrator | Tuesday 11 March 2025 01:12:11 +0000 (0:00:01.757) 0:06:08.397 ********* 2025-03-11 01:14:34.958180 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.958187 | orchestrator | 2025-03-11 01:14:34.958194 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-03-11 01:14:34.958201 | orchestrator | Tuesday 11 March 2025 01:12:13 +0000 (0:00:01.974) 0:06:10.371 ********* 2025-03-11 01:14:34.958208 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-03-11 01:14:34.958216 | orchestrator | 2025-03-11 01:14:34.958223 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-03-11 01:14:34.958230 | orchestrator | Tuesday 11 March 2025 01:12:14 +0000 (0:00:01.884) 0:06:12.255 ********* 2025-03-11 01:14:34.958237 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-11 01:14:34.958245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-11 01:14:34.958252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-11 01:14:34.958259 | orchestrator | 2025-03-11 01:14:34.958266 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-03-11 01:14:34.958274 | orchestrator | Tuesday 11 March 2025 01:12:20 +0000 (0:00:05.928) 0:06:18.184 ********* 2025-03-11 01:14:34.958291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:14:34.958299 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:14:34.958324 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:14:34.958338 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958345 | orchestrator | 2025-03-11 01:14:34.958352 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-03-11 01:14:34.958359 | orchestrator | Tuesday 11 March 2025 01:12:22 +0000 (0:00:01.930) 0:06:20.114 
********* 2025-03-11 01:14:34.958366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:14:34.958374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:14:34.958381 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:14:34.958418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:14:34.958427 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:14:34.958441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-11 01:14:34.958449 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958455 | orchestrator | 2025-03-11 01:14:34.958462 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-11 01:14:34.958470 | orchestrator | Tuesday 11 March 2025 01:12:25 +0000 (0:00:02.794) 0:06:22.909 ********* 2025-03-11 01:14:34.958477 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958484 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958491 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958498 | orchestrator | 2025-03-11 01:14:34.958505 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-11 01:14:34.958512 | orchestrator | Tuesday 11 March 2025 01:12:26 +0000 (0:00:00.602) 0:06:23.511 ********* 2025-03-11 01:14:34.958519 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958526 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958537 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958544 | orchestrator | 2025-03-11 01:14:34.958551 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-03-11 01:14:34.958558 | orchestrator | Tuesday 11 March 2025 01:12:27 +0000 (0:00:01.204) 0:06:24.715 ********* 2025-03-11 01:14:34.958565 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-03-11 01:14:34.958572 | orchestrator | 2025-03-11 01:14:34.958579 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-03-11 01:14:34.958586 | orchestrator | Tuesday 11 March 2025 01:12:28 +0000 (0:00:01.500) 0:06:26.215 ********* 2025-03-11 01:14:34.958607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:14:34.958628 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:14:34.958644 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:14:34.958658 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958666 | orchestrator | 2025-03-11 01:14:34.958673 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-03-11 01:14:34.958680 | orchestrator | Tuesday 11 March 2025 01:12:30 +0000 (0:00:01.940) 0:06:28.155 ********* 2025-03-11 01:14:34.958687 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:14:34.958694 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:14:34.958713 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-11 01:14:34.958728 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958735 | orchestrator | 2025-03-11 01:14:34.958745 | orchestrator | TASK [haproxy-config : Configuring firewall for 
nova-cell:nova-spicehtml5proxy] *** 2025-03-11 01:14:34.958753 | orchestrator | Tuesday 11 March 2025 01:12:32 +0000 (0:00:01.879) 0:06:30.035 ********* 2025-03-11 01:14:34.958760 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958767 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958774 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958781 | orchestrator | 2025-03-11 01:14:34.958788 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-11 01:14:34.958795 | orchestrator | Tuesday 11 March 2025 01:12:35 +0000 (0:00:02.813) 0:06:32.849 ********* 2025-03-11 01:14:34.958802 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958809 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958826 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958833 | orchestrator | 2025-03-11 01:14:34.958841 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-11 01:14:34.958849 | orchestrator | Tuesday 11 March 2025 01:12:35 +0000 (0:00:00.381) 0:06:33.231 ********* 2025-03-11 01:14:34.958856 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958863 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958870 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958877 | orchestrator | 2025-03-11 01:14:34.958884 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-03-11 01:14:34.958891 | orchestrator | Tuesday 11 March 2025 01:12:37 +0000 (0:00:01.207) 0:06:34.438 ********* 2025-03-11 01:14:34.958898 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-03-11 01:14:34.958906 | orchestrator | 2025-03-11 01:14:34.958913 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy 
config] *** 2025-03-11 01:14:34.958920 | orchestrator | Tuesday 11 March 2025 01:12:38 +0000 (0:00:01.727) 0:06:36.166 ********* 2025-03-11 01:14:34.958933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-11 01:14:34.958941 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.958948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-11 01:14:34.958956 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.958963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-11 01:14:34.958974 
| orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.958981 | orchestrator | 2025-03-11 01:14:34.958988 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-03-11 01:14:34.958996 | orchestrator | Tuesday 11 March 2025 01:12:41 +0000 (0:00:02.227) 0:06:38.393 ********* 2025-03-11 01:14:34.959003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-11 01:14:34.959010 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.959017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-11 01:14:34.959025 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.959042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-11 01:14:34.959050 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.959057 | orchestrator | 2025-03-11 01:14:34.959065 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-03-11 01:14:34.959072 | orchestrator | Tuesday 11 March 2025 01:12:42 +0000 (0:00:01.699) 0:06:40.092 ********* 2025-03-11 01:14:34.959079 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.959086 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.959093 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.959100 | orchestrator | 2025-03-11 01:14:34.959107 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-11 01:14:34.959114 | orchestrator | Tuesday 11 March 2025 01:12:45 +0000 (0:00:03.022) 0:06:43.115 ********* 2025-03-11 01:14:34.959121 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.959128 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.959135 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.959142 | orchestrator | 2025-03-11 01:14:34.959149 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-11 01:14:34.959156 | orchestrator | Tuesday 11 March 2025 01:12:46 +0000 (0:00:00.372) 0:06:43.487 ********* 2025-03-11 01:14:34.959163 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.959171 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.959178 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.959185 | orchestrator | 2025-03-11 01:14:34.959197 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-03-11 01:14:34.959205 | 
orchestrator | Tuesday 11 March 2025 01:12:47 +0000 (0:00:01.620) 0:06:45.108 ********* 2025-03-11 01:14:34.959212 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.959219 | orchestrator | 2025-03-11 01:14:34.959226 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-03-11 01:14:34.959233 | orchestrator | Tuesday 11 March 2025 01:12:49 +0000 (0:00:01.887) 0:06:46.996 ********* 2025-03-11 01:14:34.959241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.959254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-11 
01:14:34.959262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.959302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 
'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.959314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-11 01:14:34.959322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.959359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.959372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-11 01:14:34.959380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.959402 | orchestrator | 2025-03-11 01:14:34.959409 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-03-11 01:14:34.959417 | orchestrator | Tuesday 11 March 2025 01:12:54 +0000 (0:00:04.887) 0:06:51.883 ********* 2025-03-11 01:14:34.959439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-11 
01:14:34.959449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-11 01:14:34.959461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.959483 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.959490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.959503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2025-03-11 01:14:34.959520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.959548 | orchestrator | skipping: [testbed-node-1] 2025-03-11 
01:14:34.959555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.959562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-11 01:14:34.959574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-11 01:14:34.959604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-11 01:14:34.959624 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.959631 | orchestrator | 2025-03-11 01:14:34.959638 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-03-11 01:14:34.959646 | orchestrator | Tuesday 11 March 2025 01:12:55 +0000 (0:00:01.346) 0:06:53.230 ********* 2025-03-11 01:14:34.959653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-11 
01:14:34.959660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-11 01:14:34.959667 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.959675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-11 01:14:34.959682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-11 01:14:34.959689 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.959697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-11 01:14:34.959704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-11 01:14:34.959712 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.959719 | orchestrator | 2025-03-11 01:14:34.959727 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-03-11 01:14:34.959734 | orchestrator | Tuesday 11 March 2025 01:12:57 +0000 (0:00:01.597) 0:06:54.827 ********* 2025-03-11 01:14:34.959741 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.959748 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.959755 | orchestrator | skipping: [testbed-node-2] 
2025-03-11 01:14:34.959762 | orchestrator | 2025-03-11 01:14:34.959769 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-03-11 01:14:34.959776 | orchestrator | Tuesday 11 March 2025 01:12:58 +0000 (0:00:00.610) 0:06:55.438 ********* 2025-03-11 01:14:34.959783 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.959790 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.959797 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.959804 | orchestrator | 2025-03-11 01:14:34.959812 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-03-11 01:14:34.959819 | orchestrator | Tuesday 11 March 2025 01:12:59 +0000 (0:00:01.787) 0:06:57.225 ********* 2025-03-11 01:14:34.959828 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.959835 | orchestrator | 2025-03-11 01:14:34.959846 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-03-11 01:14:34.959854 | orchestrator | Tuesday 11 March 2025 01:13:01 +0000 (0:00:01.692) 0:06:58.918 ********* 2025-03-11 01:14:34.959871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-11 01:14:34.959880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-11 01:14:34.959893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-11 01:14:34.959901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-11 01:14:34.959909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-11 01:14:34.959931 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-11 01:14:34.959945 | orchestrator | 2025-03-11 01:14:34.959953 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-03-11 01:14:34.959960 | orchestrator | Tuesday 11 March 2025 01:13:09 +0000 (0:00:07.494) 0:07:06.413 ********* 2025-03-11 01:14:34.959967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-11 01:14:34.959975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-11 01:14:34.959988 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.959995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-11 01:14:34.960018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-11 01:14:34.960027 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.960034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-11 01:14:34.960042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-11 01:14:34.960053 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.960060 | orchestrator | 2025-03-11 01:14:34.960067 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-03-11 01:14:34.960074 | orchestrator | Tuesday 11 March 2025 01:13:10 +0000 (0:00:00.996) 0:07:07.409 ********* 2025-03-11 01:14:34.960081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-11 
01:14:34.960089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-11 01:14:34.960096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-11 01:14:34.960104 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.960111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-11 01:14:34.960118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-11 01:14:34.960135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-11 01:14:34.960142 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.960154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-11 01:14:34.960161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-11 01:14:34.960168 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-11 01:14:34.960176 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.960183 | orchestrator | 2025-03-11 01:14:34.960190 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-03-11 01:14:34.960197 | orchestrator | Tuesday 11 March 2025 01:13:11 +0000 (0:00:01.687) 0:07:09.097 ********* 2025-03-11 01:14:34.960204 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.960211 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.960218 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.960225 | orchestrator | 2025-03-11 01:14:34.960232 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-03-11 01:14:34.960240 | orchestrator | Tuesday 11 March 2025 01:13:12 +0000 (0:00:00.646) 0:07:09.743 ********* 2025-03-11 01:14:34.960247 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.960254 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.960261 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.960268 | orchestrator | 2025-03-11 01:14:34.960275 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-03-11 01:14:34.960282 | orchestrator | Tuesday 11 March 2025 01:13:14 +0000 (0:00:01.933) 0:07:11.677 ********* 2025-03-11 01:14:34.960289 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.960300 | orchestrator | 2025-03-11 01:14:34.960307 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-03-11 01:14:34.960314 | orchestrator | Tuesday 11 March 2025 
01:13:16 +0000 (0:00:02.202) 0:07:13.879 ********* 2025-03-11 01:14:34.960321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-11 01:14:34.960329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:14:34.960336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960358 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-11 01:14:34.960368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.960383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:14:34.960394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.960426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-11 01:14:34.960434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:14:34.960441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960454 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.960474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-11 01:14:34.960482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:14:34.960502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.960534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-11 01:14:34.960554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:14:34.960571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.960597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-11 01:14:34.960656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:14:34.960668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.960698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960705 | orchestrator | 2025-03-11 01:14:34.960713 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-03-11 01:14:34.960720 | orchestrator | Tuesday 11 March 2025 01:13:22 +0000 (0:00:05.499) 0:07:19.378 ********* 2025-03-11 01:14:34.960727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-11 01:14:34.960734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:14:34.960742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.960780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-11 01:14:34.960788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:14:34.960795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.960826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960838 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.960845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-11 01:14:34.960853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:14:34.960860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2025-03-11 01:14:34.960889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-11 01:14:34.960902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:14:34.960909 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-11 01:14:34.960931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.960939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-11 01:14:34.960954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960966 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.960973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.960988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.960995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-11 01:14:34.961008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-11 01:14:34.961018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.961030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.961037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-11 01:14:34.961045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-11 01:14:34.961052 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.961059 | orchestrator | 2025-03-11 01:14:34.961066 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-03-11 01:14:34.961073 | orchestrator | Tuesday 11 March 2025 01:13:23 +0000 (0:00:01.926) 0:07:21.305 ********* 2025-03-11 01:14:34.961081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-11 01:14:34.961088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 
'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-11 01:14:34.961095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:14:34.961103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:14:34.961110 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.961117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-11 01:14:34.961124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-11 01:14:34.961132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:14:34.961143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:14:34.961151 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.961158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-11 01:14:34.961171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-11 01:14:34.961179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:14:34.961186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-11 01:14:34.961253 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.961261 | orchestrator | 2025-03-11 01:14:34.961269 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-03-11 01:14:34.961276 | orchestrator | Tuesday 11 March 2025 01:13:25 +0000 (0:00:01.867) 0:07:23.172 ********* 2025-03-11 01:14:34.961283 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.961290 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.961297 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.961304 | orchestrator | 2025-03-11 01:14:34.961311 | orchestrator | TASK 
[proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-03-11 01:14:34.961318 | orchestrator | Tuesday 11 March 2025 01:13:26 +0000 (0:00:00.343) 0:07:23.516 ********* 2025-03-11 01:14:34.961325 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.961332 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.961339 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.961346 | orchestrator | 2025-03-11 01:14:34.961353 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-03-11 01:14:34.961360 | orchestrator | Tuesday 11 March 2025 01:13:27 +0000 (0:00:01.787) 0:07:25.303 ********* 2025-03-11 01:14:34.961368 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.961374 | orchestrator | 2025-03-11 01:14:34.961381 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-03-11 01:14:34.961389 | orchestrator | Tuesday 11 March 2025 01:13:30 +0000 (0:00:02.086) 0:07:27.390 ********* 2025-03-11 01:14:34.961396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:14:34.961411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:14:34.961422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-11 01:14:34.961430 | orchestrator | 2025-03-11 01:14:34.961437 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-03-11 01:14:34.961444 | orchestrator | Tuesday 11 March 2025 01:13:33 +0000 (0:00:03.559) 0:07:30.949 ********* 2025-03-11 01:14:34.961452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-11 01:14:34.961459 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.961466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-11 01:14:34.961478 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.961485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-11 01:14:34.961492 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.961502 | orchestrator | 2025-03-11 01:14:34.961509 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-03-11 01:14:34.961516 | orchestrator | Tuesday 11 March 2025 01:13:34 +0000 (0:00:00.421) 0:07:31.370 ********* 2025-03-11 01:14:34.961526 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-11 01:14:34.961534 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.961541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-11 01:14:34.961548 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.961555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-11 01:14:34.961562 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.961569 | orchestrator | 2025-03-11 01:14:34.961577 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-03-11 01:14:34.961584 | orchestrator | Tuesday 11 March 2025 01:13:35 +0000 (0:00:01.244) 0:07:32.615 ********* 2025-03-11 01:14:34.961591 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.961598 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.961605 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.961646 | orchestrator | 2025-03-11 01:14:34.961654 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-03-11 01:14:34.961661 | orchestrator | Tuesday 11 March 2025 01:13:35 +0000 (0:00:00.651) 0:07:33.267 ********* 2025-03-11 01:14:34.961668 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.961675 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.961682 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.961689 | orchestrator | 2025-03-11 01:14:34.961696 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-03-11 01:14:34.961703 | orchestrator | Tuesday 11 March 
2025 01:13:37 +0000 (0:00:01.481) 0:07:34.748 ********* 2025-03-11 01:14:34.961710 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-11 01:14:34.961717 | orchestrator | 2025-03-11 01:14:34.961724 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-03-11 01:14:34.961735 | orchestrator | Tuesday 11 March 2025 01:13:39 +0000 (0:00:02.080) 0:07:36.829 ********* 2025-03-11 01:14:34.961742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.961750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.961761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.961769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.961777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.961788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-11 01:14:34.961796 | orchestrator | 2025-03-11 01:14:34.961803 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-03-11 01:14:34.961810 | orchestrator | Tuesday 11 March 2025 01:13:48 +0000 (0:00:08.885) 0:07:45.714 ********* 2025-03-11 01:14:34.961818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.961829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.961836 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.961844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.961855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.961862 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.961870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.961880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-11 01:14:34.961888 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.961895 | orchestrator | 2025-03-11 01:14:34.961902 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-03-11 01:14:34.961909 | orchestrator | Tuesday 11 March 2025 01:13:49 +0000 (0:00:01.064) 0:07:46.779 ********* 2025-03-11 01:14:34.961920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:14:34.961928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:14:34.961935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:14:34.961942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:14:34.961949 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.961956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:14:34.961963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:14:34.961971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:14:34.961978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:14:34.961985 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.961992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:14:34.961999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-11 01:14:34.962006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-11 01:14:34.962029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}})  2025-03-11 01:14:34.962037 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962044 | orchestrator | 2025-03-11 01:14:34.962052 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-03-11 01:14:34.962058 | orchestrator | Tuesday 11 March 2025 01:13:51 +0000 (0:00:01.659) 0:07:48.438 ********* 2025-03-11 01:14:34.962064 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962074 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962080 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962086 | orchestrator | 2025-03-11 01:14:34.962093 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-03-11 01:14:34.962099 | orchestrator | Tuesday 11 March 2025 01:13:51 +0000 (0:00:00.651) 0:07:49.090 ********* 2025-03-11 01:14:34.962105 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962115 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962125 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962131 | orchestrator | 2025-03-11 01:14:34.962138 | orchestrator | TASK [include_role : swift] **************************************************** 2025-03-11 01:14:34.962144 | orchestrator | Tuesday 11 March 2025 01:13:53 +0000 (0:00:01.871) 0:07:50.962 ********* 2025-03-11 01:14:34.962150 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962156 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962163 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962169 | orchestrator | 2025-03-11 01:14:34.962175 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-03-11 01:14:34.962181 | orchestrator | Tuesday 11 March 2025 01:13:53 +0000 (0:00:00.348) 0:07:51.311 ********* 2025-03-11 01:14:34.962187 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962194 | orchestrator 
| skipping: [testbed-node-1] 2025-03-11 01:14:34.962200 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962206 | orchestrator | 2025-03-11 01:14:34.962213 | orchestrator | TASK [include_role : trove] **************************************************** 2025-03-11 01:14:34.962219 | orchestrator | Tuesday 11 March 2025 01:13:54 +0000 (0:00:00.655) 0:07:51.966 ********* 2025-03-11 01:14:34.962225 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962231 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962237 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962243 | orchestrator | 2025-03-11 01:14:34.962250 | orchestrator | TASK [include_role : venus] **************************************************** 2025-03-11 01:14:34.962256 | orchestrator | Tuesday 11 March 2025 01:13:55 +0000 (0:00:00.656) 0:07:52.622 ********* 2025-03-11 01:14:34.962262 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962269 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962275 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962281 | orchestrator | 2025-03-11 01:14:34.962287 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-03-11 01:14:34.962293 | orchestrator | Tuesday 11 March 2025 01:13:55 +0000 (0:00:00.660) 0:07:53.283 ********* 2025-03-11 01:14:34.962300 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962306 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962312 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962318 | orchestrator | 2025-03-11 01:14:34.962325 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-03-11 01:14:34.962331 | orchestrator | Tuesday 11 March 2025 01:13:56 +0000 (0:00:00.383) 0:07:53.667 ********* 2025-03-11 01:14:34.962337 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962343 | orchestrator | 
skipping: [testbed-node-1] 2025-03-11 01:14:34.962349 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962356 | orchestrator | 2025-03-11 01:14:34.962362 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-03-11 01:14:34.962368 | orchestrator | Tuesday 11 March 2025 01:13:57 +0000 (0:00:01.105) 0:07:54.772 ********* 2025-03-11 01:14:34.962374 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:14:34.962381 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:14:34.962387 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:14:34.962393 | orchestrator | 2025-03-11 01:14:34.962400 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-03-11 01:14:34.962409 | orchestrator | Tuesday 11 March 2025 01:13:58 +0000 (0:00:00.692) 0:07:55.465 ********* 2025-03-11 01:14:34.962416 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:14:34.962422 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:14:34.962428 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:14:34.962434 | orchestrator | 2025-03-11 01:14:34.962441 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-03-11 01:14:34.962447 | orchestrator | Tuesday 11 March 2025 01:13:58 +0000 (0:00:00.685) 0:07:56.150 ********* 2025-03-11 01:14:34.962453 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:14:34.962459 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:14:34.962466 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:14:34.962472 | orchestrator | 2025-03-11 01:14:34.962482 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-03-11 01:14:34.962489 | orchestrator | Tuesday 11 March 2025 01:14:00 +0000 (0:00:01.525) 0:07:57.676 ********* 2025-03-11 01:14:34.962495 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:14:34.962501 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:14:34.962508 | orchestrator 
| ok: [testbed-node-2] 2025-03-11 01:14:34.962514 | orchestrator | 2025-03-11 01:14:34.962520 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-03-11 01:14:34.962526 | orchestrator | Tuesday 11 March 2025 01:14:01 +0000 (0:00:01.393) 0:07:59.069 ********* 2025-03-11 01:14:34.962533 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:14:34.962539 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:14:34.962545 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:14:34.962551 | orchestrator | 2025-03-11 01:14:34.962558 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-03-11 01:14:34.962564 | orchestrator | Tuesday 11 March 2025 01:14:02 +0000 (0:00:01.135) 0:08:00.205 ********* 2025-03-11 01:14:34.962570 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:14:34.962577 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:14:34.962583 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:14:34.962589 | orchestrator | 2025-03-11 01:14:34.962596 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-03-11 01:14:34.962602 | orchestrator | Tuesday 11 March 2025 01:14:14 +0000 (0:00:11.386) 0:08:11.591 ********* 2025-03-11 01:14:34.962608 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:14:34.962625 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:14:34.962632 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:14:34.962638 | orchestrator | 2025-03-11 01:14:34.962645 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-03-11 01:14:34.962651 | orchestrator | Tuesday 11 March 2025 01:14:15 +0000 (0:00:01.422) 0:08:13.013 ********* 2025-03-11 01:14:34.962657 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962664 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962670 | orchestrator | skipping: [testbed-node-2] 2025-03-11 
01:14:34.962676 | orchestrator | 2025-03-11 01:14:34.962682 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-03-11 01:14:34.962689 | orchestrator | Tuesday 11 March 2025 01:14:16 +0000 (0:00:00.970) 0:08:13.984 ********* 2025-03-11 01:14:34.962695 | orchestrator | changed: [testbed-node-0] 2025-03-11 01:14:34.962701 | orchestrator | changed: [testbed-node-1] 2025-03-11 01:14:34.962707 | orchestrator | changed: [testbed-node-2] 2025-03-11 01:14:34.962713 | orchestrator | 2025-03-11 01:14:34.962723 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-03-11 01:14:34.962729 | orchestrator | Tuesday 11 March 2025 01:14:27 +0000 (0:00:11.047) 0:08:25.032 ********* 2025-03-11 01:14:34.962736 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962745 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962751 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962758 | orchestrator | 2025-03-11 01:14:34.962764 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-03-11 01:14:34.962770 | orchestrator | Tuesday 11 March 2025 01:14:28 +0000 (0:00:00.691) 0:08:25.724 ********* 2025-03-11 01:14:34.962776 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962782 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962789 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962795 | orchestrator | 2025-03-11 01:14:34.962801 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-03-11 01:14:34.962807 | orchestrator | Tuesday 11 March 2025 01:14:29 +0000 (0:00:00.701) 0:08:26.426 ********* 2025-03-11 01:14:34.962814 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962820 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962826 | orchestrator | skipping: [testbed-node-2] 2025-03-11 
01:14:34.962832 | orchestrator | 2025-03-11 01:14:34.962838 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-03-11 01:14:34.962848 | orchestrator | Tuesday 11 March 2025 01:14:29 +0000 (0:00:00.392) 0:08:26.818 ********* 2025-03-11 01:14:34.962855 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962861 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962868 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962874 | orchestrator | 2025-03-11 01:14:34.962880 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-03-11 01:14:34.962886 | orchestrator | Tuesday 11 March 2025 01:14:30 +0000 (0:00:00.669) 0:08:27.487 ********* 2025-03-11 01:14:34.962893 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962899 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962905 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962911 | orchestrator | 2025-03-11 01:14:34.962918 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-03-11 01:14:34.962924 | orchestrator | Tuesday 11 March 2025 01:14:30 +0000 (0:00:00.695) 0:08:28.182 ********* 2025-03-11 01:14:34.962930 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.962937 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.962943 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.962949 | orchestrator | 2025-03-11 01:14:34.962955 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-03-11 01:14:34.962962 | orchestrator | Tuesday 11 March 2025 01:14:31 +0000 (0:00:00.395) 0:08:28.578 ********* 2025-03-11 01:14:34.962968 | orchestrator | ok: [testbed-node-0] 2025-03-11 01:14:34.962974 | orchestrator | ok: [testbed-node-1] 2025-03-11 01:14:34.962980 | orchestrator | ok: [testbed-node-2] 2025-03-11 01:14:34.962987 | 
orchestrator | 2025-03-11 01:14:34.962993 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-03-11 01:14:34.962999 | orchestrator | Tuesday 11 March 2025 01:14:32 +0000 (0:00:01.383) 0:08:29.962 ********* 2025-03-11 01:14:34.963006 | orchestrator | skipping: [testbed-node-0] 2025-03-11 01:14:34.963012 | orchestrator | skipping: [testbed-node-1] 2025-03-11 01:14:34.963018 | orchestrator | skipping: [testbed-node-2] 2025-03-11 01:14:34.963024 | orchestrator | 2025-03-11 01:14:34.963031 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-11 01:14:34.963037 | orchestrator | testbed-node-0 : ok=83  changed=41  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-03-11 01:14:34.963044 | orchestrator | testbed-node-1 : ok=82  changed=41  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-03-11 01:14:34.963051 | orchestrator | testbed-node-2 : ok=82  changed=41  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-03-11 01:14:34.963057 | orchestrator | 2025-03-11 01:14:34.963063 | orchestrator | 2025-03-11 01:14:34.963072 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-11 01:14:34.963079 | orchestrator | Tuesday 11 March 2025 01:14:33 +0000 (0:00:01.132) 0:08:31.095 ********* 2025-03-11 01:14:34.963085 | orchestrator | =============================================================================== 2025-03-11 01:14:34.963092 | orchestrator | haproxy-config : Copying over cinder haproxy config -------------------- 12.65s 2025-03-11 01:14:34.963098 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 11.39s 2025-03-11 01:14:34.963104 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 11.05s 2025-03-11 01:14:34.963110 | orchestrator | haproxy-config : Copying over glance haproxy config -------------------- 10.27s 
2025-03-11 01:14:34.963116 | orchestrator | haproxy-config : Copying over heat haproxy config ---------------------- 10.00s 2025-03-11 01:14:34.963123 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 9.54s 2025-03-11 01:14:34.963129 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.89s 2025-03-11 01:14:34.963135 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 8.47s 2025-03-11 01:14:34.963146 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 7.70s 2025-03-11 01:14:34.963153 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.49s 2025-03-11 01:14:34.963159 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 7.38s 2025-03-11 01:14:34.963165 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.31s 2025-03-11 01:14:34.963172 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 7.20s 2025-03-11 01:14:34.963180 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 7.01s 2025-03-11 01:14:37.991788 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 7.00s 2025-03-11 01:14:37.991971 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 6.84s 2025-03-11 01:14:37.991983 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 6.81s 2025-03-11 01:14:37.991991 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.42s 2025-03-11 01:14:37.992000 | orchestrator | loadbalancer : Ensuring haproxy service config subdir exists ------------ 6.39s 2025-03-11 01:14:37.992008 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.93s 2025-03-11 
01:14:37.992017 | orchestrator | 2025-03-11 01:14:34 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:37.992040 | orchestrator | 2025-03-11 01:14:37 | INFO  | Task e04ade02-6f72-4823-bd57-55fc432409b6 is in state STARTED 2025-03-11 01:14:37.992919 | orchestrator | 2025-03-11 01:14:37 | INFO  | Task bc85befc-f8a3-4821-9888-1c9f23e3c774 is in state STARTED 2025-03-11 01:14:37.994306 | orchestrator | 2025-03-11 01:14:37 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:41.044740 | orchestrator | 2025-03-11 01:14:37 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:41.044879 | orchestrator | 2025-03-11 01:14:41 | INFO  | Task e04ade02-6f72-4823-bd57-55fc432409b6 is in state STARTED 2025-03-11 01:14:41.048206 | orchestrator | 2025-03-11 01:14:41 | INFO  | Task bc85befc-f8a3-4821-9888-1c9f23e3c774 is in state STARTED 2025-03-11 01:14:41.048949 | orchestrator | 2025-03-11 01:14:41 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:44.104024 | orchestrator | 2025-03-11 01:14:41 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:44.104154 | orchestrator | 2025-03-11 01:14:44 | INFO  | Task e04ade02-6f72-4823-bd57-55fc432409b6 is in state STARTED 2025-03-11 01:14:44.108088 | orchestrator | 2025-03-11 01:14:44 | INFO  | Task bc85befc-f8a3-4821-9888-1c9f23e3c774 is in state STARTED 2025-03-11 01:14:44.108126 | orchestrator | 2025-03-11 01:14:44 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED 2025-03-11 01:14:47.169769 | orchestrator | 2025-03-11 01:14:44 | INFO  | Wait 1 second(s) until the next check 2025-03-11 01:14:47.169884 | orchestrator | 2025-03-11 01:14:47 | INFO  | Task e04ade02-6f72-4823-bd57-55fc432409b6 is in state STARTED 2025-03-11 01:14:47.170144 | orchestrator | 2025-03-11 01:14:47 | INFO  | Task bc85befc-f8a3-4821-9888-1c9f23e3c774 is in state STARTED 2025-03-11 01:14:47.171124 | orchestrator | 2025-03-11 01:14:47 | 
INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:16:34.144297 | orchestrator | 2025-03-11 01:16:34 | INFO  | Task
e04ade02-6f72-4823-bd57-55fc432409b6 is in state STARTED
2025-03-11 01:16:34.144771 | orchestrator | 2025-03-11 01:16:34 | INFO  | Task bc85befc-f8a3-4821-9888-1c9f23e3c774 is in state STARTED
2025-03-11 01:16:34.146003 | orchestrator | 2025-03-11 01:16:34 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:16:37.196051 | orchestrator | 2025-03-11 01:16:34 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:16:37.196179 | orchestrator | 2025-03-11 01:16:37 | INFO  | Task e04ade02-6f72-4823-bd57-55fc432409b6 is in state STARTED
2025-03-11 01:16:37.197096 | orchestrator | 2025-03-11 01:16:37 | INFO  | Task bc85befc-f8a3-4821-9888-1c9f23e3c774 is in state STARTED
2025-03-11 01:16:37.199345 | orchestrator | 2025-03-11 01:16:37 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:16:40.254171 | orchestrator | 2025-03-11 01:16:37 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:16:40.254294 | orchestrator | 2025-03-11 01:16:40 | INFO  | Task e04ade02-6f72-4823-bd57-55fc432409b6 is in state STARTED
2025-03-11 01:16:40.254733 | orchestrator | 2025-03-11 01:16:40 | INFO  | Task bc85befc-f8a3-4821-9888-1c9f23e3c774 is in state STARTED
2025-03-11 01:16:40.256222 | orchestrator | 2025-03-11 01:16:40 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:16:43.318162 | orchestrator | 2025-03-11 01:16:40 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:16:43.318299 | orchestrator | 2025-03-11 01:16:43 | INFO  | Task e04ade02-6f72-4823-bd57-55fc432409b6 is in state STARTED
2025-03-11 01:16:43.319478 | orchestrator | 2025-03-11 01:16:43 | INFO  | Task bc85befc-f8a3-4821-9888-1c9f23e3c774 is in state STARTED
2025-03-11 01:16:43.319631 | orchestrator | 2025-03-11 01:16:43 | INFO  | Task 7778a798-31f2-49ba-be77-d4b021b5d0d3 is in state STARTED
2025-03-11 01:16:43.319972 | orchestrator | 2025-03-11 01:16:43 | INFO  | Wait 1 second(s) until the next check
2025-03-11 01:16:45.063434 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-03-11 01:16:45.071084 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-03-11 01:16:45.791499 |
2025-03-11 01:16:45.791654 | PLAY [Post output play]
2025-03-11 01:16:45.823256 |
2025-03-11 01:16:45.823390 | LOOP [stage-output : Register sources]
2025-03-11 01:16:45.904830 |
2025-03-11 01:16:45.905054 | TASK [stage-output : Check sudo]
2025-03-11 01:16:46.582538 | orchestrator | sudo: a password is required
2025-03-11 01:16:46.946578 | orchestrator | ok: Runtime: 0:00:00.015115
2025-03-11 01:16:46.965491 |
2025-03-11 01:16:46.965649 | LOOP [stage-output : Set source and destination for files and folders]
2025-03-11 01:16:47.010407 |
2025-03-11 01:16:47.010684 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-03-11 01:16:47.102521 | orchestrator | ok
2025-03-11 01:16:47.113419 |
2025-03-11 01:16:47.113545 | LOOP [stage-output : Ensure target folders exist]
2025-03-11 01:16:47.571766 | orchestrator | ok: "docs"
2025-03-11 01:16:47.572127 |
2025-03-11 01:16:47.807952 | orchestrator | ok: "artifacts"
2025-03-11 01:16:48.042009 | orchestrator | ok: "logs"
2025-03-11 01:16:48.051992 |
2025-03-11 01:16:48.052115 | LOOP [stage-output : Copy files and folders to staging folder]
2025-03-11 01:16:48.095093 |
2025-03-11 01:16:48.095293 | TASK [stage-output : Make all log files readable]
2025-03-11 01:16:48.370665 | orchestrator | ok
2025-03-11 01:16:48.380872 |
2025-03-11 01:16:48.381001 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-03-11 01:16:48.436432 | orchestrator | skipping: Conditional result was False
2025-03-11 01:16:48.451413 |
2025-03-11 01:16:48.451555 | TASK [stage-output : Discover log files for compression]
2025-03-11 01:16:48.476579 | orchestrator | skipping: Conditional result was False
2025-03-11 01:16:48.490757 |
2025-03-11 01:16:48.490880 | LOOP [stage-output : Archive everything from logs]
2025-03-11 01:16:48.563779 |
2025-03-11 01:16:48.563913 | PLAY [Post cleanup play]
2025-03-11 01:16:48.587234 |
2025-03-11 01:16:48.587346 | TASK [Set cloud fact (Zuul deployment)]
2025-03-11 01:16:48.653423 | orchestrator | ok
2025-03-11 01:16:48.668750 |
2025-03-11 01:16:48.668860 | TASK [Set cloud fact (local deployment)]
2025-03-11 01:16:48.706402 | orchestrator | skipping: Conditional result was False
2025-03-11 01:16:48.717350 |
2025-03-11 01:16:48.717459 | TASK [Clean the cloud environment]
2025-03-11 01:16:49.347813 | orchestrator | 2025-03-11 01:16:49 - clean up servers
2025-03-11 01:16:52.767326 | orchestrator | 2025-03-11 01:16:52 - testbed-manager
2025-03-11 01:16:52.849533 | orchestrator | 2025-03-11 01:16:52 - testbed-node-2
2025-03-11 01:16:52.934214 | orchestrator | 2025-03-11 01:16:52 - testbed-node-3
2025-03-11 01:16:53.019259 | orchestrator | 2025-03-11 01:16:53 - testbed-node-4
2025-03-11 01:16:53.121083 | orchestrator | 2025-03-11 01:16:53 - testbed-node-0
2025-03-11 01:16:53.229499 | orchestrator | 2025-03-11 01:16:53 - testbed-node-5
2025-03-11 01:16:53.319038 | orchestrator | 2025-03-11 01:16:53 - testbed-node-1
2025-03-11 01:16:53.405726 | orchestrator | 2025-03-11 01:16:53 - clean up keypairs
2025-03-11 01:16:53.422093 | orchestrator | 2025-03-11 01:16:53 - testbed
2025-03-11 01:16:53.446593 | orchestrator | 2025-03-11 01:16:53 - wait for servers to be gone
2025-03-11 01:17:04.687920 | orchestrator | 2025-03-11 01:17:04 - clean up ports
2025-03-11 01:17:05.631975 | orchestrator | 2025-03-11 01:17:05 - 19cedf18-35a1-4855-8ad3-d16134ae9f2c
2025-03-11 01:17:05.820626 | orchestrator | 2025-03-11 01:17:05 - 3bd7ac0b-e693-4b77-ac15-aed82ba839e1
2025-03-11 01:17:06.040963 | orchestrator | 2025-03-11 01:17:06 - 4acecba4-7df9-453c-a360-27934f460b78
2025-03-11 01:17:06.242724 | orchestrator | 2025-03-11 01:17:06 - 55f10b90-d868-4c31-bd97-85244ef93c06
2025-03-11 01:17:06.426743 | orchestrator | 2025-03-11 01:17:06 - 86968f54-3f57-47e2-afbf-8a2b1f8955da
2025-03-11 01:17:06.779946 | orchestrator | 2025-03-11 01:17:06 - cbc115d4-afc1-45a7-bf38-5cdbcad5deaf
2025-03-11 01:17:06.961197 | orchestrator | 2025-03-11 01:17:06 - e60803b5-e604-4456-812e-dc25da7bb5d5
2025-03-11 01:17:07.147581 | orchestrator | 2025-03-11 01:17:07 - clean up volumes
2025-03-11 01:17:07.286921 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-0-node-base
2025-03-11 01:17:07.323162 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-2-node-base
2025-03-11 01:17:07.370143 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-5-node-base
2025-03-11 01:17:07.413942 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-3-node-base
2025-03-11 01:17:07.452325 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-1-node-base
2025-03-11 01:17:07.491330 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-4-node-base
2025-03-11 01:17:07.531071 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-manager-base
2025-03-11 01:17:07.569531 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-7-node-1
2025-03-11 01:17:07.611050 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-12-node-0
2025-03-11 01:17:07.649779 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-9-node-3
2025-03-11 01:17:07.689609 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-8-node-2
2025-03-11 01:17:07.727450 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-17-node-5
2025-03-11 01:17:07.768679 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-1-node-1
2025-03-11 01:17:07.810599 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-3-node-3
2025-03-11 01:17:07.862264 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-16-node-4
2025-03-11 01:17:07.906356 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-10-node-4
2025-03-11 01:17:07.953051 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-15-node-3
2025-03-11 01:17:07.993623 | orchestrator | 2025-03-11 01:17:07 - testbed-volume-2-node-2
2025-03-11 01:17:08.042170 | orchestrator | 2025-03-11 01:17:08 - testbed-volume-5-node-5
2025-03-11 01:17:08.084008 | orchestrator | 2025-03-11 01:17:08 - testbed-volume-6-node-0
2025-03-11 01:17:08.139463 | orchestrator | 2025-03-11 01:17:08 - testbed-volume-4-node-4
2025-03-11 01:17:08.183159 | orchestrator | 2025-03-11 01:17:08 - testbed-volume-11-node-5
2025-03-11 01:17:08.224821 | orchestrator | 2025-03-11 01:17:08 - testbed-volume-14-node-2
2025-03-11 01:17:08.267754 | orchestrator | 2025-03-11 01:17:08 - testbed-volume-13-node-1
2025-03-11 01:17:08.309962 | orchestrator | 2025-03-11 01:17:08 - testbed-volume-0-node-0
2025-03-11 01:17:08.350951 | orchestrator | 2025-03-11 01:17:08 - disconnect routers
2025-03-11 01:17:08.464719 | orchestrator | 2025-03-11 01:17:08 - testbed
2025-03-11 01:17:09.120989 | orchestrator | 2025-03-11 01:17:09 - clean up subnets
2025-03-11 01:17:09.173236 | orchestrator | 2025-03-11 01:17:09 - subnet-testbed-management
2025-03-11 01:17:09.311790 | orchestrator | 2025-03-11 01:17:09 - clean up networks
2025-03-11 01:17:09.465050 | orchestrator | 2025-03-11 01:17:09 - net-testbed-management
2025-03-11 01:17:09.762818 | orchestrator | 2025-03-11 01:17:09 - clean up security groups
2025-03-11 01:17:09.792686 | orchestrator | 2025-03-11 01:17:09 - testbed-node
2025-03-11 01:17:09.875127 | orchestrator | 2025-03-11 01:17:09 - testbed-management
2025-03-11 01:17:09.969277 | orchestrator | 2025-03-11 01:17:09 - clean up floating ips
2025-03-11 01:17:10.003939 | orchestrator | 2025-03-11 01:17:10 - 81.163.192.35
2025-03-11 01:17:11.149700 | orchestrator | 2025-03-11 01:17:11 - clean up routers
2025-03-11 01:17:11.193675 | orchestrator | 2025-03-11 01:17:11 - testbed
2025-03-11 01:17:12.279759 | orchestrator | changed
2025-03-11 01:17:12.325354 |
2025-03-11 01:17:12.325455 | PLAY RECAP
2025-03-11 01:17:12.325508 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-03-11 01:17:12.325534 |
2025-03-11 01:17:12.430393 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-03-11 01:17:12.433382 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-03-11 01:17:13.123361 |
2025-03-11 01:17:13.123510 | PLAY [Base post-fetch]
2025-03-11 01:17:13.152652 |
2025-03-11 01:17:13.152784 | TASK [fetch-output : Set log path for multiple nodes]
2025-03-11 01:17:13.219913 | orchestrator | skipping: Conditional result was False
2025-03-11 01:17:13.233645 |
2025-03-11 01:17:13.233806 | TASK [fetch-output : Set log path for single node]
2025-03-11 01:17:13.297495 | orchestrator | ok
2025-03-11 01:17:13.306664 |
2025-03-11 01:17:13.306779 | LOOP [fetch-output : Ensure local output dirs]
2025-03-11 01:17:13.778761 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/b1e5c5de2ce3410bae8409c63759374d/work/logs"
2025-03-11 01:17:14.055722 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b1e5c5de2ce3410bae8409c63759374d/work/artifacts"
2025-03-11 01:17:14.331725 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b1e5c5de2ce3410bae8409c63759374d/work/docs"
2025-03-11 01:17:14.354956 |
2025-03-11 01:17:14.355185 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-03-11 01:17:15.175602 | orchestrator | changed: .d..t...... ./
2025-03-11 01:17:15.176010 | orchestrator | changed: All items complete
2025-03-11 01:17:15.176077 |
2025-03-11 01:17:15.750059 | orchestrator | changed: .d..t...... ./
2025-03-11 01:17:16.311290 | orchestrator | changed: .d..t...... ./
2025-03-11 01:17:16.345579 |
2025-03-11 01:17:16.345701 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-03-11 01:17:16.389616 | orchestrator | skipping: Conditional result was False
2025-03-11 01:17:16.396499 | orchestrator | skipping: Conditional result was False
2025-03-11 01:17:16.449522 |
2025-03-11 01:17:16.449612 | PLAY RECAP
2025-03-11 01:17:16.449665 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-03-11 01:17:16.449692 |
2025-03-11 01:17:16.554356 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-03-11 01:17:16.562083 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-03-11 01:17:17.229564 |
2025-03-11 01:17:17.229719 | PLAY [Base post]
2025-03-11 01:17:17.257788 |
2025-03-11 01:17:17.257916 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-03-11 01:17:18.030230 | orchestrator | changed
2025-03-11 01:17:18.067929 |
2025-03-11 01:17:18.068067 | PLAY RECAP
2025-03-11 01:17:18.068145 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-03-11 01:17:18.068231 |
2025-03-11 01:17:18.179577 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-03-11 01:17:18.182627 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-03-11 01:17:18.921761 |
2025-03-11 01:17:18.921921 | PLAY [Base post-logs]
2025-03-11 01:17:18.938257 |
2025-03-11 01:17:18.938388 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-03-11 01:17:19.412885 | localhost | changed
2025-03-11 01:17:19.419786 |
2025-03-11 01:17:19.419972 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-03-11 01:17:19.453004 | localhost | ok
2025-03-11 01:17:19.463122 |
2025-03-11 01:17:19.463326 | TASK [Set zuul-log-path fact]
2025-03-11 01:17:19.482653 | localhost | ok
2025-03-11 01:17:19.498948 |
2025-03-11 01:17:19.499057 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-03-11 01:17:19.534768 | localhost | skipping: Conditional result was False
2025-03-11 01:17:19.538969 |
2025-03-11 01:17:19.539096 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-03-11 01:17:19.580899 | localhost | ok
2025-03-11 01:17:19.587295 |
2025-03-11 01:17:19.587456 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-03-11 01:17:19.626616 | localhost | skipping: Conditional result was False
2025-03-11 01:17:19.635464 |
2025-03-11 01:17:19.635659 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-03-11 01:17:19.662361 | localhost | skipping: Conditional result was False
2025-03-11 01:17:19.671819 |
2025-03-11 01:17:19.672047 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-03-11 01:17:19.699136 | localhost | skipping: Conditional result was False
2025-03-11 01:17:19.707691 |
2025-03-11 01:17:19.707873 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-03-11 01:17:19.744137 | localhost | skipping: Conditional result was False
2025-03-11 01:17:19.757324 |
2025-03-11 01:17:19.757489 | TASK [upload-logs : Create log directories]
2025-03-11 01:17:20.275042 | localhost | changed
2025-03-11 01:17:20.279381 |
2025-03-11 01:17:20.279484 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-03-11 01:17:20.814445 | localhost -> localhost | ok: Runtime: 0:00:00.007084
2025-03-11 01:17:20.819646 |
2025-03-11 01:17:20.819764 | TASK [upload-logs : Upload logs to log server]
2025-03-11 01:17:21.382770 | localhost | Output suppressed because no_log was given
2025-03-11 01:17:21.390633 |
2025-03-11 01:17:21.390844 | LOOP [upload-logs : Compress console log and json output]
2025-03-11 01:17:21.464976 | localhost | skipping: Conditional result was False
2025-03-11 01:17:21.481120 | localhost | skipping: Conditional result was False
2025-03-11 01:17:21.499345 |
2025-03-11 01:17:21.499535 | LOOP [upload-logs : Upload compressed console log and json output]
2025-03-11 01:17:21.572918 | localhost | skipping: Conditional result was False
2025-03-11 01:17:21.573559 |
2025-03-11 01:17:21.585509 | localhost | skipping: Conditional result was False
2025-03-11 01:17:21.596353 |
2025-03-11 01:17:21.596577 | LOOP [upload-logs : Upload console log and json output]
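The deploy run above ends in RUN END RESULT_TIMED_OUT while three tasks never leave STARTED: the client simply checks each task's state and sleeps between rounds until the job timeout aborts it. A minimal sketch of that polling pattern, assuming a hypothetical `get_task_state` lookup and terminal-state names for illustration (not the OSISM client's actual API):

```python
import time

# Hypothetical stand-in for the real state lookup (e.g. a query against a
# task result backend); it flips to SUCCESS after a few polls so the
# sketch terminates. NOT the OSISM client's actual API.
def get_task_state(task_id, _calls={"n": 0}):
    _calls["n"] += 1
    return "STARTED" if _calls["n"] <= 9 else "SUCCESS"

def wait_for_tasks(task_ids, interval=1.0, timeout=60.0):
    """Poll each task until none is left in a running state, or give up."""
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {pending}")
        for task_id in list(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED", "RETRY"):
                pending.remove(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return True
```

When the tasks' real runtime exceeds the job-level timeout, a loop like this is what produces the long run of repeated `STARTED` lines before Zuul kills the playbook.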
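The `Clean the cloud environment` task tears the testbed down in dependency order: servers first, then the ports, volumes, and network plumbing they pin, with routers last. A sketch of that ordering with a hypothetical `clean_environment` driver (the play actually runs an OSISM cleanup script, not this code):

```python
# Teardown order observed in the cleanup log; each step frees resources
# that would otherwise block deletion of the ones after it.
TEARDOWN_ORDER = [
    "servers",             # instances hold ports and volume attachments
    "keypairs",
    "wait_for_servers",    # server deletion is asynchronous
    "ports",
    "volumes",
    "disconnect_routers",  # detach router interfaces before subnet removal
    "subnets",
    "networks",
    "security_groups",
    "floating_ips",
    "routers",             # routers go last, once nothing references them
]

def clean_environment(cleaners):
    """Run one cleanup callable per resource type, in dependency order."""
    for step in TEARDOWN_ORDER:
        cleaners[step]()
    return TEARDOWN_ORDER
```

The explicit "wait for servers to be gone" step matters: port and volume deletion would fail while the instances that own them are still being removed.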